Pre-election polls provide a valuable service to society by helping citizens and members of media organizations understand the choices people are making in elections. Surveys are the best tools we have to make sense of questions about how different groups of people are planning to vote, what kinds of issues matter to them, and how those considerations are changing over the course of an election campaign. But although polls are excellent for making sense of the parameters of an election, there are important limits to what polls can tell us about the results, especially before voting has taken place. Recognizing these limits is crucial for making sure that surveys are not abused to support problematic narratives. The frequently asked questions below represent a brief primer on election polling, its purpose and value, and what it can and cannot be used for.
Here is a brief list of frequently asked questions about understanding pre-election polling in the 2024 US Elections.
Pre-election polls can be helpful in thinking about what to expect on Election Day, but they are not predictions of the election outcomes. Polls provide a sense of what people would do in the election if they were asked to vote right now, using a group of individuals carefully selected to mirror the public. Although polls have often accurately reflected election trends, they can and sometimes do differ from eventual results because of sampling errors, shifts in voter preferences, difficulties in recruiting some groups of individuals to take polls, and the challenges in producing effective models of who is likely to vote. Despite these challenges, polling generally comes within a few percentage points of the final result when conducted properly. However, it’s important to note that polls are estimates, not forecasts, and should be interpreted with caution, especially when the race is close.
Several factors influence the accuracy of election polls, including:
- Sampling Methods: A poll’s representativeness depends on the sample, which should reflect the population in terms of demographics like age, race, gender, and geography. Non-representative samples can lead to skewed results.
- Response Rates: Declining response rates have made it more challenging to ensure that polls are representative. Pollsters use careful sampling and weighting to correct for this (a simple illustration of weighting appears after this list), but low response rates mean that even small differences in who responds to a poll can introduce bias.
- Turnout Predictions: Not every American who is eligible to vote actually does so. Figuring out who is likely to vote is a major challenge in election polling. Many pollsters attempt to model which respondents are likely to vote, but turnout can be unpredictable. Underestimating or overestimating turnout among certain groups can alter a poll’s estimates and undermine its accuracy.
- Question Wording and Order: Subtle differences in how questions are asked can lead to variations in responses. For instance, most questions ask about how people would vote “if the election were held today,” but some ask people how they plan to vote or even who people think will win.
- Late Deciders and Changes in Opinion: Some voters make their decisions about whether and how to vote in the final days of the campaign or even on Election Day, which can shift outcomes after polls have been conducted.
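To make the weighting point above more concrete, here is a minimal sketch of weighting a poll on a single characteristic (education). All of the population shares, respondents, and candidate labels are hypothetical, and real pollsters typically adjust for many characteristics at once, but the basic logic is the same: respondents from groups that are underrepresented in the sample count for a bit more, and respondents from overrepresented groups count for a bit less.

```python
# A minimal sketch of weighting a poll on one characteristic (education).
# All of the shares, respondents, and candidate labels below are hypothetical.

# Assumed population shares of the electorate by education level.
population_share = {"college": 0.40, "no_college": 0.60}

# Hypothetical respondents: (education group, candidate preference).
respondents = [
    ("college", "A"), ("college", "A"), ("college", "B"),
    ("college", "A"), ("college", "B"), ("college", "A"),
    ("no_college", "B"), ("no_college", "B"),
    ("no_college", "A"), ("no_college", "B"),
]

n = len(respondents)

# Share of each education group in the sample (college respondents are overrepresented here).
sample_share = {
    group: sum(1 for edu, _ in respondents if edu == group) / n
    for group in population_share
}

# Weight each respondent so the sample's education mix matches the population's.
weight = {group: population_share[group] / sample_share[group] for group in population_share}

total_weight = sum(weight[edu] for edu, _ in respondents)
weighted_support_a = sum(weight[edu] for edu, choice in respondents if choice == "A") / total_weight
unweighted_support_a = sum(1 for _, choice in respondents if choice == "A") / n

print(f"Unweighted support for Candidate A: {unweighted_support_a:.0%}")  # 50%
print(f"Weighted support for Candidate A:   {weighted_support_a:.0%}")    # about 42%
```

In this invented example, college-educated respondents are both overrepresented and more favorable toward Candidate A, so weighting the sample back to the assumed population mix lowers the estimate of A’s support.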
In both the 2016 and 2020 elections, polls underestimated the support for Donald Trump in key battleground states. The American Association for Public Opinion Research (AAPOR) put together a task force in each of those years to assess the accuracy of the polls and to identify what could have been done to improve these results. These task forces conducted in-depth examinations of polling to see whether things like who responded to the polls and how the polls were conducted were related to the issues observed.
In 2016, the task force report found that national polls performed well but that battleground state polls often missed the mark. It also found that the state polls would have done much better if they had been weighted to account for the fact that less-educated voters were less likely to respond, and that voters who decided very late in the race tended to support Trump more than Clinton. In 2020, both state and national polls tended to overestimate Biden’s support ahead of the election. The issues in 2020 were less obvious, but the committee found some evidence suggesting that Trump supporters may have been somewhat less likely to respond to pollsters in the first place.
After 2016, election polls were much more likely to adopt education-based weighting, and polling firms are using a number of methods to try to account for these issues in 2024. Midterm polls in 2022 were quite accurate by historical standards, and actually underestimated Democratic support, but it is not clear whether pollsters have addressed all of the sources of bias that might affect the 2024 contest. AAPOR has already established a committee to assess pre-election polling in 2024, and that committee will release a report sometime in early to mid-2025.
Right now, polling firms are using a number of different methods to assess how candidates are faring in elections. These include different sampling strategies (ways of finding possible voters, e.g., by sampling addresses or using online panels), different survey modes (ways of asking questions, e.g., online or by phone), different strategies for weighting the results to match the public, and different approaches to estimating who is a likely voter. It is unclear whether there is an objectively best strategy among these choices or whether they are all similarly accurate.
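There is likewise no single, standard way to identify likely voters. The toy example below illustrates one common style of approach, in which respondents are scored on self-reported intention to vote and past voting history, and only those at or above a cutoff are counted. Every field, point value, and threshold here is invented for illustration; actual firms use their own, often more elaborate, models.

```python
# A toy likely-voter screen. Respondents are scored on self-reported intention and
# past voting, and only those at or above a cutoff are counted as likely voters.
# Every field, point value, and threshold here is invented for illustration.

def likely_voter_score(r: dict) -> int:
    score = 0
    if r["says_will_vote"]:
        score += 2
    if r["voted_in_last_presidential"]:
        score += 2
    if r["voted_in_last_midterm"]:
        score += 1
    return score

respondents = [
    {"choice": "A", "says_will_vote": True,  "voted_in_last_presidential": True,  "voted_in_last_midterm": True},
    {"choice": "B", "says_will_vote": True,  "voted_in_last_presidential": True,  "voted_in_last_midterm": True},
    {"choice": "B", "says_will_vote": True,  "voted_in_last_presidential": True,  "voted_in_last_midterm": False},
    {"choice": "A", "says_will_vote": True,  "voted_in_last_presidential": False, "voted_in_last_midterm": False},
    {"choice": "A", "says_will_vote": False, "voted_in_last_presidential": False, "voted_in_last_midterm": False},
]

CUTOFF = 3  # arbitrary threshold for this example
likely_voters = [r for r in respondents if likely_voter_score(r) >= CUTOFF]

all_support_a = sum(1 for r in respondents if r["choice"] == "A") / len(respondents)
lv_support_a = sum(1 for r in likely_voters if r["choice"] == "A") / len(likely_voters)

print(f"Support for A among all respondents: {all_support_a:.0%}")  # 60%
print(f"Support for A among likely voters:   {lv_support_a:.0%}")   # 33%
```

As the made-up numbers show, the choice of screen can meaningfully change a poll’s estimate, which is one reason different firms can report different results from similar raw data.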
We do know that some strategies are not effective ways to find out who people are going to vote for. Polls that simply capture the preferences of people who visit particular websites, or that gather responses from self-selected online volunteers without attempting to figure out how those samples relate to the larger population, have no basis for making reliable estimates. Instead, polling firms need to carefully select respondents and consider how those respondents relate to the larger population to have a chance of understanding public opinion.
An important indicator of polling quality has long been the willingness of polling firms to disclose the methods they use for sampling, data collection, weighting, and likely voter modeling. This disclosure has been formalized as part of the AAPOR Transparency Initiative.
The margin of error indicates the range within which the true value (such as a candidate’s level of support) likely falls. For example, if a poll shows Candidate A with 50% support and a margin of error of ±3%, the candidate’s true support could be anywhere between 47% and 53%.
It’s also important to note that the margin of error applies only to sampling error, not to other types of errors like nonresponse bias or incorrect turnout models. Lately, a good rule of thumb has been that the true potential for error in a poll is probably about twice as big as the reported margin of error.
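For readers who want to see where a figure like ±3% comes from, the short calculation below shows the standard margin of error for a proportion from a simple random sample at 95% confidence; the support level and sample size are hypothetical. Keep in mind that this captures only sampling error, which is why the total potential for error can be roughly twice as large.

```python
import math

# Margin of error for a proportion from a simple random sample at 95% confidence.
# The support level and sample size below are hypothetical.
def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

p = 0.50   # observed support for Candidate A
n = 1000   # number of respondents

moe = margin_of_error(p, n)
print(f"Margin of error: about ±{moe:.1%}")                          # roughly ±3.1 points
print(f"Plausible range for true support: {p - moe:.1%} to {p + moe:.1%}")
```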
Some websites, like FiveThirtyEight and RealClearPolitics, present averages of poll results and sometimes even combine them with additional data to estimate how people will vote on Election Day. Averages have some benefits over individual poll results: they tend to discount the everyday noise in polling results and provide a better picture of emerging trends. They are also likely to have a little less error than individual polls because the impacts of some errors are likely to differ between surveys.
But although polling averages, for statistical reasons, are likely to be closer to the overall election results than individual polls, they can still be somewhat inaccurate, either because the set of polls being averaged shares a particular bias or because of shifts in the electorate. One of the most difficult tasks for polling averages is figuring out how much error to expect in an average. In 2016, in particular, some polling averages reported implausibly low errors and incorrectly asserted that the election was a virtual lock for Hillary Clinton. Although polling aggregators have increasingly noted the possibility of errors in averages, there is still no agreed-upon approach for determining how those errors should be calculated.
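As a rough illustration of why averaging helps, the sketch below takes a simple mean of several hypothetical polls of the same race; the individual results are invented, and real aggregators use more elaborate schemes that weight polls by recency, sample size, and pollster track record. Even so, the basic point holds: the average smooths out poll-to-poll noise, but it cannot remove an error that is shared by all of the polls being averaged.

```python
from statistics import mean, stdev

# Hypothetical results for Candidate A from several recent polls of the same race.
poll_results = [0.48, 0.51, 0.47, 0.50, 0.49, 0.52]

print(f"Individual polls range from {min(poll_results):.0%} to {max(poll_results):.0%}")
print(f"Simple polling average: {mean(poll_results):.1%} "
      f"(poll-to-poll spread: {stdev(poll_results):.1%})")
# The average damps poll-to-poll noise, but if every poll shares the same bias,
# the average inherits that bias as well.
```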
Although polls can be conducted poorly, intentionally biased polls are rare among reputable polling organizations. Polls with misleading results often stem from methodological flaws like biased sampling, leading questions, or poor weighting.
Transparency and rigor in polling can help both experts and the public identify potential issues and distinguish between high-quality polls and those that may be unreliable.
It is also possible for even well-conducted polls to bias polling averages if they are selectively released by polling firms. While non-partisan and media-sponsored surveys typically release all of their polling results, partisan sponsors and candidates may only release results when they bolster a campaign narrative. Public polling results from partisan sponsors or candidates should typically be taken with a grain of salt.
When looking at polls, voters should:
- Focus on trends across multiple polls rather than any single poll result.
- Consider the margin of error and whether a race is within that margin.
- Look for transparency in how the poll was conducted, including sampling methods and weighting.
- Understand that polls are estimates based on current data, not forecasts of future outcomes.
Polls are a valuable tool for understanding public sentiment, but they are not sufficiently precise for figuring out what is likely to happen in a very close election. It’s important to approach them with a critical eye and consider them as one piece of information among many.
Pollsters do not make up results to tell a particular story. When survey firms collect data, they use that information to make the best possible estimate of what Americans are thinking. But survey firms make a number of choices that have implications for the conclusions they reach, and some of these choices can shift polls a bit in one direction or another. For instance, the fact that many state pollsters did not weight on education in 2016 led to an underestimate of Trump’s vote share, even though that same choice had not caused noticeable problems in earlier elections. Choices like how to identify likely voters, or whether and how to weight on partisanship and past voting behavior, can similarly shape a poll’s projected electorate. While the goal of most polls is to make the most accurate possible estimate, these kinds of choices can lead to errors in some cases.
It is common for polls to differ modestly from election results, and the direction and magnitude of these differences vary from election to election. This occurs both because of the inherent imprecision in polls and because of the dynamic nature of election contests. Unless there are very large differences between the results of public opinion polls and election tallies, polls cannot really tell us anything about the likelihood of foul play.
It may take some time to fully assess the accuracy of the 2024 pre-election polls. For one, the accuracy of polls needs to be compared to the final vote tallies both nationally and in each state. Although news organizations are often eager to call elections as quickly as possible on election night, some states accept ballots that are mailed on election day even if they arrive days or weeks later. For national results, this means that a full accounting of accuracy will not be possible immediately after the election. While we may get an inkling of the accuracy of polls for the states that tally their results quickly, in past elections it has not always been the case that different states experienced similar polling errors. It will also be important to assess any errors that do emerge to identify whether they are related to certain sets of choices that different polling firms make. For this reason, while an initial sense of accuracy is likely to take a few weeks, it will likely take until 2025 for us to have a thorough picture of any factors affecting the accuracy of the 2024 election polls.