Standard Definitions


For a long time, survey researchers have needed more comprehensive and reliable diagnostic tools to understand the components of total survey error. Some of those components, such as the margin of sampling error, are relatively easy to calculate and familiar to many who use survey research. Other components, such as the influence of question wording on responses, are more difficult to ascertain. Groves (1989) catalogues three other major areas in which error can occur in sample surveys. One is coverage, where error can result if some members of the population under study do not have a known nonzero chance of being included in the sample. Another is measurement effect, such as when the instrument or items on the instrument are constructed in such a way as to produce unreliable or invalid data. The third is nonresponse effect, where nonrespondents in the sample that researchers originally drew differ from respondents in ways that are germane to the objectives of the survey.
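As an illustration of the first, easily calculated component, the familiar margin of sampling error for a proportion can be sketched as follows. This is a minimal example assuming a simple random sample; the function name and defaults are illustrative, not drawn from the report.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of sampling error for a proportion from a simple random sample.

    p: observed proportion; n: sample size; z: critical value
    (1.96 corresponds to roughly 95% confidence). Assumes simple
    random sampling; complex designs would widen this via a design effect.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 50/50 split in a sample of 1,000 gives roughly a +/-3.1 point margin.
print(round(margin_of_error(0.5, 1000), 3))  # 0.031
```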

Defining final disposition codes and calculating survey outcome rates is the topic of the Standard Definitions report. It is often assumed, correctly or not, that the lower the response rate, the more doubt there is about the validity of the sample. Although response rate information alone is not sufficient for determining how much nonresponse error exists in a survey, or even whether it exists, calculating the rates is a critical first step toward understanding the presence of this component of potential survey error. By knowing the disposition of every element drawn in a survey sample, researchers can assess whether their sample might contain nonresponse error and the potential reasons for that error.
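To make the calculation concrete, the simplest of the AAPOR outcome rates, Response Rate 1, divides completed interviews by all cases that are eligible or of unknown eligibility. The sketch below is a minimal illustration with hypothetical disposition counts; the parameter names are descriptive shorthand, not the report's official case codes.

```python
def response_rate_1(complete, partial, refusal, non_contact, other,
                    unknown_household, unknown_other):
    """AAPOR Response Rate 1: completes divided by all potentially
    eligible cases.

    Treats every case of unknown eligibility as eligible, making RR1
    the most conservative (lowest) of the AAPOR response rates.
    """
    eligible = ((complete + partial)
                + (refusal + non_contact + other)
                + (unknown_household + unknown_other))
    return complete / eligible

# Hypothetical dispositions from a telephone sample of 1,000 cases.
rate = response_rate_1(complete=600, partial=50, refusal=200,
                       non_contact=100, other=25,
                       unknown_household=15, unknown_other=10)
print(round(rate, 3))  # 0.6
```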

With this report AAPOR offers a tool that can be used as a guide to one important aspect of a survey’s quality. It is a comprehensive, well-delineated way of describing the final disposition of cases and calculating outcome rates for surveys conducted by telephone (landline and cell), for personal interviews in a sample of households, for mail surveys of specifically named persons (i.e., a survey in which named persons are the sampled elements), and for Web surveys.

AAPOR urges all practitioners to use these standardized sample disposition codes in all reports of survey methods, whether the project is proprietary work for private-sector clients or a public, government, or academic survey. This will give researchers common ground on which to compare the outcome rates of different surveys.

Revisions to the Standard Definitions

  • The first edition (1998) was based on the work of a committee headed by Tom W. Smith. Other AAPOR members who served on the committee include Barbara Bailar, Mick Couper, Donald Dillman, Robert M. Groves, William D. Kalsbeek, Jack Ludwig, Peter V. Miller, Harry O’Neill and Stanley Presser.
  • The second edition (2000) was edited by Rob Daves, who chaired a group that included Janice Ballou, Paul J. Lavrakas, David Moore, and Smith. Lavrakas led the writing for the portions dealing with mail surveys of specifically named persons and for the reorganization of the earlier edition. The group wishes to thank Don Dillman and David Demers for their comments on a draft of this edition.
  • The third edition (2004) was edited by Smith who chaired a committee of Daves, Lavrakas, Daniel M. Merkle and Couper. The new material on complex samples was mainly contributed by Groves and J. Michael Brick.
  • The fourth edition (2006) was edited by Smith, who chaired a committee of Daves, Lavrakas, Couper, Shap Wolf, and Nancy Mathiowetz. The new material on Internet surveys was mainly contributed by a subcommittee chaired by Couper with Lavrakas, Smith, and Tracy Tuten Ryan as members.
  • The fifth edition (2008) was edited by Smith, who chaired the committee of Daves, Lavrakas, Couper, Mary Losch and Brick. The new material largely relates to the handling of cell phones in surveys.
  • The sixth edition (2009) was edited by Smith, who chaired a committee of Daves, Lavrakas, Couper, Reg Baker and Jon Cohen. Lavrakas led the updating of the section on postal codes. Changes mostly dealt with mix-mode surveys and methods for estimating eligibility rates for unknown cases.
  • The seventh edition (2011) was edited by Smith who chaired the committee of Daves, Lavrakas, Couper, Timothy Johnson and Richard Morin. Couper led the updating of the section on Internet surveys and Sara Zuckerbraun drafted the section on establishment surveys.
  • The eighth edition (2015) was edited by Smith who chaired the committee of Daves, Lavrakas, Couper, and Johnson. The revised section on establishment surveys was developed by Sara Zuckerbraun and Katherine Morton. The new section on dual-frame telephone surveys was prepared by a sub-committee headed by Daves with Smith, David Dutwin, Mario Callegaro, and Mansour Fahimi as members.
  • The ninth edition was edited by Smith who chaired the committee of Daves, Lavrakas, Couper, Johnson, and Dutwin. The new section on mail surveys of unnamed persons was prepared by a sub-committee headed by Dutwin with Couper, Daves, Johnson, Lavrakas, and Smith as members.
  • The tenth edition (2023) was edited by Ned English who chaired the committee of Amaya, Berktold, Jackson, Kirzinger, Marlar, McPhee, and Nagle. Amaya and McPhee led the revision and update of dispositions for this new version and drove much of the restructuring. Additional support for this edition was provided by Kristen Olson, Ashley Hyon, Ben Philips, Stephen Immerwahr, Clifford Young, and P.J. Lugtig.


Download the full Standard Definitions Report (10th edition, 2023)

Download the Methods of Calculating Eligibility Rates (August, 2009)

Download the Response Rate Calculator V5.1 (Excel Spreadsheet – April, 2023)

Watch the February 2024 webinar on the New Standard Definitions