Standards and Ethics

AAPOR believes in upholding the highest standards for our profession, with our members agreeing to abide by the AAPOR Code of Professional Ethics and Practices.

Our goals are to support sound and ethical practice in the conduct of survey and public opinion research and in the use of such research for policy- and decision-making in the public and private sectors, as well as to improve public understanding of survey and public opinion research methods and the proper use of those research results. The Code describes the obligations that we believe all research professionals have, regardless of their membership in this Association or any other, to uphold the credibility of survey and public opinion research.

AAPOR also provides information on working with Institutional Review Boards, best-practice guidelines, working examples, disclosure FAQs, and more. Much of AAPOR’s work is in developing and promoting resources that help researchers meet demanding standards.

AAPOR Code of Professional Ethics and Practices

(Revised April 2021)

The Code of Professional Ethics and Practices

We—the members of the American Association for Public Opinion Research (AAPOR) and its affiliated chapters—subscribe to the principles expressed in this document, the AAPOR Code of Professional Ethics and Practices (“the Code”). Our goals are to support sound and ethical practice in the conduct of public opinion and survey research and promote the informed and appropriate use of research results.

The Code is based on fundamental ethical principles that apply to the conduct of research regardless of an individual’s membership in AAPOR or any other organization. Adherence to the principles and actions set out in the Code is expected of all public opinion and survey researchers.

As AAPOR members, we pledge to maintain the highest standards of scientific competence, integrity, accountability, and transparency in designing, conducting, analyzing, and reporting our work, and in our interactions with participants (sometimes referred to as respondents or subjects), clients, and the users of our research. We pledge to act in accordance with principles of basic human rights in research. We further pledge to reject all tasks or assignments that would require activities inconsistent with the principles of this Code.

The Code sets the standard for the ethical conduct of public opinion and survey research at the time of publication. Recommendations on best practices for research design, conduct, analysis, and reporting are beyond the scope of the Code but may be published separately by the AAPOR Executive Council.

Definitions of Terms Used in the Code

1. “Public opinion and survey research” refers to the systematic collection and analysis of information from or about individuals, groups, or organizations concerning their behaviors, cognitions, attitudes, or other characteristics. It encompasses both quantitative and qualitative research methods, whether traditional or emerging.

2. “Participants” refers to individuals whose behaviors, cognitions, attitudes, or other characteristics are measured and analyzed. Participants can include individuals representing groups or organizations, and individuals such as minors or those unable to consent directly, for whom a parent, legal guardian, or other proxy makes participation decisions or provides information.

3. “Personally identifiable information” refers to (i) measurements, records, or other data that can be used alone or in combination to distinguish or trace an individual’s identity and (ii) any other information that is linkable to an individual (e.g., employment information, medical history, academic records).

I. Principles of Professional Responsibility in Our Research

A. Responsibilities to Research Participants

  1. We will avoid practices or methods that may harm, endanger, humiliate, or unnecessarily mislead participants and potential participants.
  2. We will not misrepresent the purpose of our research or conduct other activities (such as sales, fundraising, or political campaigning) under the guise of conducting research.
  3. We recognize that participation in our research is voluntary except where specified by regulation or law. Participants may freely decide, without coercion, whether to participate in the research, and whether to answer any question or item presented to them.
  4. We will make no false or misleading claims as to a study’s sponsorship or purpose and will provide truthful answers to participants’ questions about the research. If disclosure of certain information about the research could endanger or cause harm to persons, could bias responses, or does not serve research objectives, it is sufficient to indicate, in response to participants’ questions about the research, that some information cannot be revealed.
  5. We recognize the critical importance of protecting the rights of minors and other vulnerable individuals when obtaining participation decisions and conducting our research.
  6. We will act in accordance with laws, regulations, and the rules of data owners (providers of research or administrative records previously collected for other purposes) governing the collection, use, and disclosure of information obtained from or about individuals, groups, or organizations.

B. Confidentiality and Protection of Personally Identifiable Information

  1. We recognize the right of participants to be provided with honest and forthright information about how personally identifiable information that we collect from them will be used.
  2. We recognize the importance of preventing unintended disclosure of personally identifiable information. We will act in accordance with all relevant best practices, laws, regulations, and data owner rules governing the handling and storage of such information. We will restrict access to identifiers and destroy them as soon as they are no longer required, in accordance with relevant laws, regulations, and data owner rules.
  3. We will not disclose any information that could be used, alone or in combination with other reasonably available information, to identify participants with their data, without participant permission.
  4. When disclosing personally identifiable data for purposes other than the current research, we will relay to data users any conditions of their use specified in the participant permission we have obtained.
  5. We understand that the use of our research results in a legal proceeding does not relieve us of our ethical obligation to protect participant privacy and keep confidential all personally identifiable data, except where participants have permitted disclosure.

C. Responsibilities to Clients and Sponsors

  1. When undertaking work for a client, we will hold confidential all proprietary information obtained about the client and about the conduct and findings of the research undertaken for the client, except when the dissemination of the information is expressly authorized by the client.
  2. We will inform those (partners, co-investigators, sponsors, and clients) for whom we conduct publicly released research studies about AAPOR’s Standards for Disclosure in Section III of the Code, and provide information on what should be disclosed in their releases.
  3. We will be mindful of the limitations of our expertise and capacity to conduct various types of research and will accept only those research assignments that we can reasonably expect to accomplish within these limitations.

D. Responsibilities to the Public

  1. We will disclose to the public the methods and procedures used to obtain our own publicly disseminated research results in accordance with Section III of the Code.
  2. We will correct any errors in our own work that come to our attention which could influence interpretation of the results. We will make good-faith efforts to identify and issue corrective statements to all parties who were presented with the erroneous results. If such errors were made publicly, we will correct them in a public forum that is as similar as possible to the original data dissemination.
  3. We will correct factual misrepresentations or distortions of our data or analysis, including those made by our research partners, co-investigators, sponsors, or clients. We will make good-faith efforts to identify and issue corrective statements to all parties who were presented with the factual misrepresentations or distortions, and if such factual misrepresentations or distortions were made publicly, we will correct them in a public forum that is as similar as possible to the original dissemination. We also recognize that differences of opinion in the interpretation of analysis are not necessarily factual misrepresentations or distortions and will exercise professional judgment in handling disclosure of such differences of opinion.

E. Responsibilities to the Profession

  1. We recognize the importance to the science of public opinion and survey research of disseminating as freely as practicable the ideas and findings that emerge from our research.
  2. We can point with pride to our membership in AAPOR and adherence to the Code as evidence of our commitment to high standards of ethics in our relations with research participants, our clients or sponsors, the public, and the profession. However, we will not cite our membership in the Association nor adherence to this Code as evidence of professional competence, because the Association does not certify the professional competence of any persons or organizations.

II. Principles of Professional Practice in the Conduct of Our Work

  1. We will recommend and employ only those tools and methods of analysis that, in our professional judgment, are fit for the purpose of the research questions.
  2. We will not knowingly select research tools and methods of analysis that yield misleading conclusions.
  3. We will not knowingly make interpretations of research results that are inconsistent with the data available, nor will we tacitly permit such interpretations. We will ensure that any findings we report, either privately or for public release, are a balanced and accurate portrayal of research results.
  4. We will not knowingly imply that interpretations are accorded greater confidence than the data warrant. When we generalize from samples to make statements about populations, we will only make claims of precision and applicability to broader populations that are warranted by the sampling frames and other methods employed.
  5. We will not engage in data fabrication or falsification.
  6. We will accurately describe and attribute research from other sources that we cite in our work, including its methodology, content, comparability, and source.

III. Standards for Disclosure

Broadly defined, research on public opinion can be conducted using a variety of quantitative and qualitative methodologies, depending on the research questions to be addressed and available resources. Accordingly, good professional practice imposes the obligation upon all public opinion and survey researchers to disclose sufficient information about how the research was conducted to allow for independent review and verification of research claims, regardless of the methodology used in the research. Full and complete disclosure for items listed in Section A will be made at the time results are released, either publicly or to a research client, as the case may be. As detailed below, the items listed in Section B, if not immediately available, will be released within 30 days of any request for such materials. If the results reported are based on multiple samples or multiple modes, the items below (as applicable) will be disclosed for each.

Section A. Items to be disclosed at the time results are released

  1. Data Collection Strategy: Describe the data collection strategies employed (e.g., surveys, focus groups, content analyses).
  2. Who Sponsored the Research and Who Conducted It. Name the sponsor of the research and the party(ies) who conducted it. If the original source of funding is different from the sponsor, this source will also be disclosed.
  3. Measurement Tools/Instruments. Measurement tools include questionnaires with survey questions and response options, show cards, vignettes, or scripts used to guide discussions or interviews. The exact wording and presentation of any measurement tool from which results are reported should be included, along with any preceding contextual information that might reasonably be expected to influence responses and any instructions to respondents or interviewers. Also included are scripts used to guide discussions and semi-structured interviews and any instructions to researchers, interviewers, moderators, and participants in the research. Content analyses and ethnographic research will provide the scheme or guide used to categorize the data; researchers will also disclose if no formal scheme was used.
  4. Population Under Study. Survey and public opinion research can be conducted with many different populations including, but not limited to, the general public, voters, people working in particular sectors, blog postings, news broadcasts, and an elected official’s social media feed. Researchers will be specific about the decision rules used to define the population when describing the study population, including location, age, other social or demographic characteristics (e.g., persons who access the internet), and time (e.g., immigrants entering the US between 2015 and 2019). Content analyses will also include the unit of analysis (e.g., news article, social media post) and the source of the data (e.g., Twitter, LexisNexis).
  5. Method Used to Generate and Recruit the Sample. The description of the methods of sampling includes the sample design and methods used to contact or recruit research participants or collect units of analysis (content analysis).
    1. Explicitly state whether the sample comes from a frame selected using a probability-based methodology (meaning selecting potential participants with a known non-zero probability from a known frame) or if the sample was selected using non-probability methods (potential participants from opt-in, volunteer, or other sources).
    2. Probability-based sample specification should include a description of the sampling frame(s), list(s), or method(s).
      1. If a frame, list, or panel is used, the description should include the name of the supplier of the sample or list and nature of the list (e.g., registered voters in the state of Texas in 2018, pre-recruited panel or pool).
      2. If a frame, list, or panel is used, the description should include the coverage of the population, including describing any segment of the target population that is not covered by the design.
    3. For surveys, focus groups, or other forms of interviews, provide a clear indication of the method(s) by which participants were selected, recruited, intercepted, or otherwise contacted or encountered, along with any eligibility requirements and/or oversampling.
    4. Describe any use of quotas.
    5. Include the geographic location of data collection activities for any in-person research.
    6. For content analysis, detail the criteria or decision rules used to include or exclude elements of content and any approaches used to sample content. If a census of the target population of content was used, that will be explicitly stated.
    7. Provide details of any strategies used to help gain cooperation (e.g., advance contact, letters and scripts, compensation or incentives, refusal conversion contacts) whether for participation in a survey, group, panel, or for participation in a particular research project. Describe any compensation/incentives provided to research subjects and the method of delivery (debit card, gift card, cash).
  6. Method(s) and Mode(s) of Data Collection. Include a description of all mode(s) used to contact participants or collect data or information (e.g., CATI, CAPI, ACASI, IVR, mail, Web for survey; paper and pencil, audio or video recording for qualitative research, etc.) and the language(s) offered or included. For qualitative research such as in-depth interviews and focus groups, also include the length of interviews or the focus group session.
  7. Dates of Data Collection. Disclose the dates of data collection (e.g., data collection from January 15 through March 10 of 2019). If this is a content analysis, include the dates of the content analyzed (e.g., social media posts between January 1 and 10, 2019).
  8. Sample Sizes (by sampling frame if more than one frame was used) and (if applicable) Discussion of the Precision of the Results.
    1. Provide sample sizes for each mode of data collection (for surveys include sample sizes for each frame, list, or panel used).
    2. For probability sample surveys, report estimates of sampling error (often described as “the margin of error”) and discuss whether or not the reported sampling error or statistical analyses have been adjusted for the design effect due to weighting, clustering, or other factors.
    3. Reports of non-probability sample surveys will only provide measures of precision if they are defined and accompanied by a detailed description of how the underlying model was specified, its assumptions validated, and the measure(s) calculated.
    4. If content was analyzed using human coders, report the number of coders, whether inter-coder reliability estimates were calculated for any variables, and the resulting estimates.
  9. How the Data Were Weighted. Describe how the weights were calculated, including the variables used and the sources of the weighting parameters.
  10. How the Data Were Processed and Procedures to Ensure Data Quality. Describe validity checks, where applicable, including but not limited to: whether the researcher added attention checks or logic checks, or excluded respondents who straight-lined or completed the survey faster than a certain time threshold; any screening of content for evidence that it originated from bots or fabricated profiles; re-contacts to confirm that the interview occurred or to verify the respondent’s identity (or both); and measures to prevent respondents from completing the survey more than once. Any data imputation or other data exclusions or replacement will also be discussed. Researchers will provide information about whether any coding was done by software or human coders (or both); if automated coding was done, name the software and specify the parameters or decision rules that were used.
  11. A General Statement Acknowledging Limitations of the Design and Data Collection. All research has limitations, and researchers will include a general statement acknowledging the unmeasured error associated with all forms of public opinion research.
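For illustration only (this is not part of the Code), the sampling-error disclosure above can be sketched as a short calculation: a margin of error for a proportion, inflated by the design effect from weighting or clustering. The function name and the example numbers are ours.

```python
import math

def margin_of_error(p, n, deff=1.0, z=1.96):
    """Two-sided margin of error for a proportion p from a sample of n,
    inflated by the design effect (deff) due to weighting or clustering."""
    se = math.sqrt(p * (1 - p) / n)   # simple-random-sample standard error
    return z * se * math.sqrt(deff)   # deff > 1 widens the interval

# A 50% estimate from n=1000 with a hypothetical deff of 1.3:
moe = margin_of_error(0.5, 1000, deff=1.3)
print(f"+/- {moe * 100:.1f} percentage points")  # -> +/- 3.5 percentage points
```

Reporting the unadjusted figure (here about 3.1 points) would understate the uncertainty of a weighted survey, which is why the Code asks researchers to say whether the design effect was accounted for.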
Section B. Items to be made available within 30 days of a request

  1. Procedures for managing the membership, participation, and attrition of the panel, if a pool, panel, or access panel was used. This should be disclosed for both probability and non-probability surveys relying on recruited panels of participants.
  2. Methods of interviewer or coder training and details of supervision and monitoring of interviewers or human coders. If machine coding was conducted, include a description of the machine learning involved in the coding.
  3. Details about screening procedures will be disclosed, including any screening for other surveys or data collection that would have made sample or selected members ineligible for the current data collection (e.g., survey, focus group, interview), such as whether a router was used in the case of online surveys.
  4. Any relevant stimuli, such as visual or sensory exhibits or show cards. In the case of surveys conducted via self-administered computer-assisted interviewing, providing the relevant screen shot(s) is strongly encouraged, though not required.
  5. Summaries of the disposition of study-specific sample records so that response rates for probability samples and participation rates for non-probability samples can be computed. If response or cooperation rates are reported, they will be computed according to AAPOR Standard Definitions. If dispositions cannot be provided, explain the reason(s) why they cannot be disclosed, and this will be mentioned as a limitation of the study.
  6. The unweighted sample size(s) on which one or more reported subgroup estimates are based.
  7. Specifications adequate for replication of indices or statistical modeling included in research reports.
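To show how the sample dispositions above feed a reportable rate, here is a minimal sketch of AAPOR Response Rate 1 (RR1), which divides complete interviews by all eligible and unknown-eligibility cases as defined in the Standard Definitions. The disposition counts are hypothetical, and RR1 is only one of several rates defined there.

```python
def aapor_rr1(complete, partial, refusal, noncontact, other, unknown):
    """AAPOR Response Rate 1: complete interviews divided by all eligible
    cases plus cases of unknown eligibility (Standard Definitions)."""
    return complete / (complete + partial + refusal + noncontact + other + unknown)

# Hypothetical dispositions for a telephone survey:
rate = aapor_rr1(complete=600, partial=50, refusal=900, noncontact=1200,
                 other=50, unknown=1200)
print(f"RR1 = {rate:.1%}")  # -> RR1 = 15.0%
```

Publishing the full disposition table, rather than only the rate, is what allows readers to recompute RR1 (or any of the other standard rates) themselves.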

Reflecting the fundamental goals of transparency and replicability, AAPOR members share the expectation that access to datasets and related documentation will be provided to allow for independent review and verification of research claims upon request. In order to protect the privacy of individual respondents, such datasets will be de-identified to remove variables that can reasonably be expected to identify a respondent. Datasets may be held without release for a period of up to one year after findings are publicly released to allow full opportunity for primary analysis. Those who commission publicly disseminated research have an obligation to disclose the rationale for why eventual public release or access to the datasets is not possible, if that is the case.

If any of our work becomes the subject of a formal investigation of an alleged violation of this Code, undertaken with the approval of the AAPOR Executive Council, we will provide additional information on the research study in such detail that a fellow researcher would be able to conduct a professional evaluation of the study.

Best Practices for Survey Research

updated March 2022

Below you will find recommendations on how to produce the best survey possible, including suggestions on the design, data collection, and analysis of a quality survey. For more detail on assessing the rigor of survey methodology, see the AAPOR Transparency Initiative.


"The quality of a survey is best judged not by its size, scope, or prominence, but by how much attention is given to [preventing, measuring and] dealing with the many important problems that can arise."

“What is a Survey?”, American Statistical Association

1. Planning for your survey

Surveys are an important research tool for learning about the feelings, thoughts, and behaviors of groups of individuals. However, surveys may not always be the best tool for answering your research questions. They may be appropriate when sufficiently timely or relevant data on the topic of study do not already exist. Researchers should consider the following questions when deciding whether to conduct a survey:

  • What are the objectives of the research? Are they unambiguous and specific?
  • Have other surveys already collected the necessary data?
  • Are other research methods such as focus groups or content analyses more appropriate?
  • Is a survey alone enough to answer the research questions, or will you also need to use other types of data (e.g., administrative records)?

Surveys should not be used to produce predetermined results, nor as a vehicle for campaigning, fundraising, or selling. Doing so is a violation of the AAPOR Code of Professional Ethics.

Once you have decided to conduct a survey, you will need to decide in what mode(s) to offer it. The most common modes are online, by phone, in person, and by mail. The choice of mode will depend at least in part on the type of information in your survey frame and the quality of the contact information. Each mode has unique advantages and disadvantages, and the decision should balance the data quality needs of the research against practical considerations such as budget and time requirements.

  • Compared with other modes, online surveys can be administered quickly and at lower cost. However, older respondents, those with lower incomes, and respondents living in rural areas are less likely to have reliable internet access or to be comfortable using computers. Online surveys may work well when the primary way you contact respondents is via email. They also may elicit more honest answers on sensitive topics because respondents do not have to disclose sensitive information directly to another person (an interviewer).
  • Telephone surveys are often more costly than online surveys because they require the use of interviewers. Well trained interviewers can help guide the respondent through questions that might be hard to understand and encourage them to keep going if they start to lose interest, reducing the number of people who do not complete the survey. Telephone surveys are often used when the sampling frame consists of telephone numbers. Quality standards can be easier to maintain in telephone surveys if interviewers are in one centralized location.
  • In-person, or face-to-face, surveys tend to cost the most and generally take more time than either online or telephone surveys. With an in-person survey, the interviewer can build a rapport with the respondent and help with questions that might be hard to understand. This is particularly relevant for long or complex surveys. In-person surveys are often used when the sampling frame consists of addresses.
  • Mailed paper surveys can work well when the mailing addresses of the survey respondents are known. Respondents can complete the survey at their own convenience and do not need to have computer or internet access. Like online surveys, they can work well for surveys on sensitive topics. However, since mail surveys cannot be automated, they work best when the flow of the questionnaire is relatively straightforward. Surveys with complex skip patterns based on prior responses may be confusing to respondents and therefore better suited for other modes.

Some surveys use multiple modes, particularly if a subset of the people in the sample are more reachable via a different mode. Often, a less costly method is employed first or used concurrently with another method, for example offering a choice between online and telephone response, or mailing a paper survey with a telephone follow-up with those who have not yet responded.

2. Designing Your Sample

When you run a survey, the people who respond to your survey are called your sample because they are a sample of people from the larger population you are studying, such as adults who live in the U.S. A sampling frame is a list of information that will allow you to contact potential respondents – your sample – from a population. Ultimately, it’s the sampling frame that allows you to draw a sample from the larger population. For a mail-based survey, it’s a list of addresses in the geographic area in which your population is located; for an online panel survey, it’s the people in the panel; for a telephone survey, it’s a list of phone numbers. Thinking through how to design your sample to best match the population of study can help you run a more accurate survey that will require fewer adjustments afterwards to match the population.

One approach is to use multiple sampling frames; for example, in a phone survey you can combine a sampling frame of cell phone numbers with a sampling frame of landline numbers (some people will appear in both), an approach now considered a best practice for phone surveys.

Surveys can be either probability-based or nonprobability-based. For decades, probability samples, often used for telephone surveys, were the gold standard for public opinion polling. In these types of samples, there is a frame that covers all or almost all of the population of interest, such as a list of all the phone numbers in the U.S. or all the residential addresses, and individuals are selected using random methods to complete the survey. More recently, nonprobability samples and online surveys have gained popularity due to the rising cost of conducting probability-based surveys. A survey conducted online can use probability samples, such as those recruited using residential addresses, or nonprobability samples, such as “opt-in” online panels or participants recruited through social media or personal networks. Analyzing and reporting nonprobability-based survey results often requires special statistical techniques and great care to ensure transparency about the methodology.
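The defining feature of a probability sample described above is that every unit on the frame has a known, non-zero chance of selection. A minimal sketch of a simple random draw from a frame; the frame of phone numbers here is fabricated purely for illustration:

```python
import random

def draw_probability_sample(frame, n, seed=20240501):
    """Simple random sample without replacement: every unit on the frame
    has the same known, non-zero inclusion probability n / len(frame)."""
    rng = random.Random(seed)   # fixed seed makes the draw reproducible
    return rng.sample(frame, n)

frame = [f"555-01{i:02d}" for i in range(100)]  # hypothetical list of phone numbers
sample = draw_probability_sample(frame, 10)
print(len(sample), "numbers drawn; inclusion probability =", 10 / len(frame))
```

An opt-in panel has no such known inclusion probability, which is exactly why its results call for the special modeling and transparency mentioned above.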

3. Designing Your Questionnaire

  • Questions should be specific and ask only about one concept at a time. For example, respondents may interpret a question about the role of “government” differently – some may think of the federal government, while others may think of state governments.
  • Write questions that are short and simple and use words and concepts that the target audience will understand. Keep in mind that knowledge, literacy skills, and English proficiency vary widely among respondents.
  • Keep questions free of bias by avoiding language that pushes respondents to respond in a certain way or that presents only one side of an issue. Also be aware that respondents may tend toward a socially desirable answer or toward saying “yes” or “agree” in an effort to please the interviewer, even if unconsciously.
  • Arrange questions in an order that will be logical to respondents but not influence how they answer. Often, it’s better for general questions to come earlier than specific questions about the same concept in the survey. For example, asking respondents whether they favor or oppose certain policy positions of a political leader prior to asking a general question about the favorability of that leader may prime them to weigh those certain policy positions more heavily than they otherwise would in determining how to answer about favorability.
  • Choose whether a question should be closed-ended or open-ended. Closed-ended questions, which provide a list of response options to choose from, place less of a burden on respondents to come up with an answer and are easier to interpret, but they are more likely to influence how a respondent answers. Open-ended questions allow respondents to respond in their own words but require coding in order to be interpreted quantitatively.
  • Response options for closed-ended questions should be chosen with care. They should be mutually exclusive, include all reasonable options (including, in some cases, options such as “don’t know” or “does not apply” or neutral choices such as “neither agree nor disagree”), and be in a logical order. In some circumstances, response options should be rotated (for example, half the respondents see response options in one order while the other half see them in reverse order) due to an observed tendency of respondents to pick the first answer in self-administered surveys and the last answer in interviewer-administered surveys. Randomization allows researchers to check whether there are order effects.
  • Consider what languages you will offer the survey in. Many U.S. residents speak limited or no English. Most nationally representative surveys in the U.S. offer both English and Spanish questionnaires, with bilingual interviewers available in interviewer-administered modes.
  • See AAPOR’s resources on question wording for more details.
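The response-option rotation described above can be implemented by seeding a random generator with the respondent ID, so each respondent's assignment is reproducible and the order shown can be recorded for later checks on order effects. The scale and function name here are illustrative, not an AAPOR-prescribed implementation:

```python
import random

OPTIONS = ["Strongly agree", "Agree", "Neither agree nor disagree",
           "Disagree", "Strongly disagree"]

def rotated_options(respondent_id):
    """Show half of respondents the scale in forward order and half reversed,
    so order effects can be measured rather than baked into every response."""
    rng = random.Random(respondent_id)        # deterministic per respondent
    reverse = rng.random() < 0.5
    order = list(reversed(OPTIONS)) if reverse else list(OPTIONS)
    return order, reverse                     # record the order shown, for analysis

order, was_reversed = rotated_options(42)
```

Storing the `was_reversed` flag alongside each response is what makes the later order-effect comparison possible.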

If you want to measure change, don’t change the measure.

To accurately measure whether an observed change between surveys taken at two points in time reflects a true shift in public attitudes or behaviors, it is critical to keep the question wording, framing, and methodology of the survey as similar as possible across the two surveys. Changes in question wording, and even in the context of the questions that precede it, can influence how respondents answer and make it appear that there has been a change in public opinion even when the only change is in how respondents interpret the question (or such changes can mask an actual shift in opinion).

Changes in mode, such as comparing a survey conducted over the telephone to one conducted online, can sometimes also mimic a real change because many people respond to certain questions differently when speaking to an interviewer on the phone versus responding in private to a web survey. Questions that are very personal, or that have a response option respondents see as socially undesirable or embarrassing, are particularly sensitive to this mode effect.

If changing the measure is necessary — perhaps due to flawed question wording or a desire to switch modes for logistical reasons — the researcher can employ a split-ballot experiment to test whether respondents will be sensitive to the change. This would involve fielding two versions of a survey — one with the previous mode or question wording and one with the new mode or question wording — with all other factors kept as similar as possible across the two versions. If respondents answer both versions similarly, there is evidence that any change over time is likely due to a real shift in attitudes or behaviors rather than an artifact of the change in measurement. If response patterns differ according to which version respondents see, then change over time should be interpreted cautiously if the researcher moves ahead with the change in measurement.
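One simple way to analyze a split-ballot experiment on a yes/no question is a two-proportion z-test comparing the two versions. The sketch below uses made-up counts for illustration; a |z| above roughly 1.96 would suggest, at the 5% level, that the measurement change itself shifts answers.

```python
import math

# Hypothetical split-ballot results: counts of "yes" answers under the
# old and new question wording (illustrative numbers, not real data).
old_yes, old_n = 312, 600
new_yes, new_n = 285, 600

p1, p2 = old_yes / old_n, new_yes / new_n
p_pool = (old_yes + new_yes) / (old_n + new_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / old_n + 1 / new_n))
z = (p1 - p2) / se

# Here z is about 1.56, below 1.96, so this (invented) experiment would
# not show a statistically significant wording effect at the 5% level.
print(f"old: {p1:.1%}, new: {p2:.1%}, z = {z:.2f}")
```

With these counts, the researcher would have some reassurance that trend comparisons across the change are defensible, though a nonsignificant result does not prove the versions are equivalent.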

  • Follow your institution’s guidance and policies on the protection of personally identifiable information, and determine whether any data privacy laws apply to the study. If releasing individual responses in a public dataset, keep in mind that demographic information and survey responses may make it possible to identify respondents even if personally identifiable information such as names and addresses is removed.
  • Surveys that ask questions about mental health, sexual assault or other trauma, discrimination, substance abuse, or other sensitive topics may pose unique risks to respondents. Consider taking one or more of the following steps:
    • Consult an Institutional Review Board for recommendations on how to mitigate the risk, even if not required by your institution.
    • Disclose the sensitive topic at the beginning of the survey, or just before the questions appear in the survey, and inform respondents that they can skip the questions if they are not comfortable answering them (and be sure to program an online survey to allow skipping, or instruct interviewers to allow refusals without probing).
    • Provide links or hotlines to resources that can help respondents who were affected by the sensitive questions (for example, a hotline that provides help for those suffering from eating disorders if the survey asks about disordered eating behaviors).
  • Build rapport with a respondent by beginning with easy and not-too-personal questions and keeping sensitive topics for later in the survey.
  • Keep respondent burden low by keeping questionnaires and individual questions short and limiting the number of difficult, sensitive, or open-ended questions.
  • Allow respondents to skip a question or provide an explicit “don’t know” or “don’t want to answer” response, especially for difficult or sensitive questions. Requiring an answer increases the risk of respondents choosing to leave the survey early.

4. Fielding your survey

Interviewers need to undergo training that covers both recruiting respondents into the survey and administering the survey. Recruitment training should cover topics such as contacting sampled respondents and convincing reluctant respondents to participate. Interviewers should be comfortable navigating the hardware and software used to conduct the survey and pronouncing difficult names or terms. They should be familiar with the concepts the survey questions are asking about and know how to help respondents without influencing their answers. Training should also involve practice interviews to familiarize the interviewers with the variety of situations they are likely to encounter. If the survey is being administered in languages other than English, interviewers should demonstrate language proficiency and cultural awareness. Training should address how to conduct non-English interviews appropriately.

Interviewers should be trained in protocols on how best to protect the health and well-being of themselves and respondents, as needed. As an example, during the COVID-19 pandemic, training in the proper use of personal protective equipment and social distancing would be appropriate for field staff.

Before fielding a survey, it is important to pretest the questionnaire. This typically consists of conducting cognitive interviews or using another qualitative research method to understand respondents’ thought processes, including their interpretation of the questions and how they came up with their answers. Pretesting should be conducted with respondents who are similar to those who will be in the survey (e.g., students if the survey sample is college students).

Conducting a pilot test to ensure that all survey procedures (e.g., recruiting respondents, administering the survey, cleaning data) work as intended is recommended. If it is unclear what question wording or survey design choice is best, implementing an experiment during data collection can help systematically compare the effects of two or more alternatives.

Checks must be made at every step of the survey life cycle to ensure that the sample is selected properly, the questionnaire is programmed accurately, interviewers follow their protocols, information from questionnaires is edited and coded accurately, and appropriate analyses are used. Data should be monitored during collection using techniques such as observation of interviewers, replication of some interviews (re-interviews), and monitoring of response and paradata distributions. Odd patterns of responses may reflect a programming error or an interviewer training issue that needs to be addressed immediately.

It is important to monitor responses and attempt to maximize the number of people who respond to your survey. If very few people respond to your survey, there is a risk that you may be missing some types of respondents entirely, and your survey estimates may be biased. There are a variety of ways to encourage respondents to participate, including offering monetary or non-monetary incentives, contacting them multiple times in different ways and at different times of the day, and/or using different persuasive messages. Interviewers can also help convince reluctant respondents to participate. Ideally, reasonable efforts should be made to convince both respondents who have not acknowledged the survey requests and those who refused to participate.
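Monitoring response typically involves computing a response rate. The sketch below loosely follows the spirit of AAPOR’s Response Rate 1 (complete interviews divided by all eligible and possibly eligible cases); the disposition counts are invented, and real studies should consult AAPOR’s Standard Definitions for the exact formulas.

```python
# Illustrative case dispositions (made-up numbers).
complete = 850        # completed interviews
partial = 40          # partial interviews / break-offs
refusal = 900         # refusals
noncontact = 2100     # eligible but never reached
unknown = 1100        # eligibility unknown

# Response Rate 1-style calculation: completes over all eligible
# and possibly eligible cases.
rr1 = complete / (complete + partial + refusal + noncontact + unknown)
print(f"response rate (RR1-style): {rr1:.1%}")
```

Tracking this rate (and the disposition counts behind it) throughout fielding helps decide when additional contact attempts or refusal-conversion efforts are worthwhile.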

5. Analyzing and reporting the survey results

Analyzing survey data is, in many ways, similar to data analysis in other fields. However, there are a few details unique to survey data analysis to take note of. It is important to be as transparent as possible, including about any statistical techniques used to adjust the data.

Depending on your survey mode, some respondents may answer only part of your survey and then leave before finishing. These are called partial responses, drop-offs, or break-offs. Flag these cases in your data and use a dedicated value to indicate that no response was recorded. Questions with no response should be coded differently from answer options such as “none of the above,” “I don’t know,” or “I prefer not to answer.” The same applies if your survey allows respondents to skip questions but continue in the survey.
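One possible coding convention (the specific codes here are illustrative, not a standard) distinguishes break-offs from item skips and explicit refusals:

```python
# Hedged sketch of a missing-data coding scheme. The negative codes are
# arbitrary illustrative choices; any scheme works as long as each kind
# of nonresponse is distinguishable from substantive answers.
BREAK_OFF = -99      # respondent left the survey before this question
SKIPPED = -98        # question shown, but no answer given
REFUSED = -97        # respondent chose "prefer not to answer"

# Invented answers to a 1-4 rating question from eight respondents.
responses = [3, 1, REFUSED, 4, SKIPPED, 2, BREAK_OFF, BREAK_OFF]

# Keep only substantive answers for analysis.
valid = [r for r in responses if r >= 0]
print(f"{len(valid)} valid answers, "
      f"{responses.count(BREAK_OFF)} break-offs, "
      f"{responses.count(SKIPPED)} skips, "
      f"{responses.count(REFUSED)} refusals")
```

Keeping the categories separate also lets you report item nonresponse and break-off rates, which are useful quality indicators in their own right.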

A common way of reporting survey data is to show cross-tabulated results, or crosstabs for short. A crosstab is a table with one question’s answer options as the column headers and another question’s answer options as the row labels. The values in the crosstab can be either counts (the number of respondents who chose that specific combination of answers) or percentages. Typically, when showing percentages, each column totals 100%.
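Using pandas, a common choice for this kind of tabulation, a crosstab with column percentages can be produced as follows; the eight responses are invented for illustration.

```python
import pandas as pd

# Hypothetical answers from eight respondents (illustrative data).
df = pd.DataFrame({
    "age_group": ["18-34", "18-34", "18-34", "35-54",
                  "35-54", "55+", "55+", "55+"],
    "support":   ["Yes", "No", "Yes", "Yes",
                  "Yes", "Yes", "No", "No"],
})

# Counts: one question as rows, the other as columns.
counts = pd.crosstab(df["support"], df["age_group"])

# Column percentages: each column sums to 100%.
pcts = pd.crosstab(df["support"], df["age_group"], normalize="columns") * 100

print(counts, pcts.round(1), sep="\n\n")
```

With real data, the same two lines scale to any pair of questions; `normalize="index"` would instead make each row total 100%, so choose the direction that matches the comparison you want to make.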

Analyzing survey data allows us to estimate findings about the population under study using a sample of people from that population. An industry standard is to calculate and report the margin of sampling error, often shortened to the margin of error. The margin of error quantifies how far the survey estimate is likely to be from the true value in the population, given the size of the sample. To learn more about the margin of error and the credibility interval, a similar measurement used for nonprobability surveys, please see AAPOR’s Margin of Error resources.
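For a simple random sample, the 95% margin of error for an estimated proportion p is commonly approximated as z·sqrt(p(1−p)/n). A small sketch, using the conservative default p = 0.5 (which gives the largest margin):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of sampling error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1000)
print(f"+/- {moe:.1%}")  # roughly +/- 3.1 points for n = 1,000
```

Note this formula assumes simple random sampling; designs with weighting or clustering typically have a larger effective margin of error, which is one reason full methodological disclosure matters.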

Ideally, the composition of your sample would match the population under study on all the characteristics relevant to the topic of your survey: age, sex, race/ethnicity, location, educational attainment, political party identification, and so on. In practice this is rarely the case, which can skew the results of your survey. Weighting is a statistical technique that adjusts the relative contributions of your respondents so that the sample matches the population characteristics more closely. To learn more about weighting, please see AAPOR’s Weighting resources.
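As a minimal illustration, post-stratification (one of several weighting approaches) weights each respondent by the ratio of their group’s population share to its sample share. The population shares and sample below are invented.

```python
# Assumed population age distribution (illustrative numbers).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Invented sample of 100 respondents: young adults are underrepresented.
sample = ["18-34"] * 20 + ["35-54"] * 35 + ["55+"] * 45

n = len(sample)
sample_share = {g: sample.count(g) / n for g in population_share}

# Post-stratification weight: population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Underrepresented 18-34 respondents get weight 0.30 / 0.20 = 1.5;
# overrepresented 55+ respondents get weight below 1.
print(weights)
```

After weighting, each respondent’s answers are multiplied by their group’s weight in any estimate; the weighted sample then reproduces the assumed population distribution. Real surveys usually weight on several characteristics at once, often via raking.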

Because there are so many different ways to run surveys, it’s important to be transparent about how a survey was run and analyzed so that people know how to interpret it and draw conclusions from it. AAPOR’s Transparency Initiative has established a list of items to report alongside your survey results in order to uphold industry transparency standards. These items include the sample size, margin of sampling error, weighting attributes, the full text of the questions and answer options, the survey mode, the population under study, how the sample was constructed and recruited, and several other details of how the survey was run. The list of items to report can vary based on the mode of your survey (online, phone, face to face, etc.). Organizations that want to commit to upholding these standards can also become members of the Transparency Initiative.