AAPOR 81st Annual Conference Call for Abstracts

81st Annual AAPOR Conference

An LA Love Story of Data, Innovation, and the Quest for Truth  

May 13 – 15, 2026

Call for Papers, Posters, Panels, Roundtables, and Idea Groups

Submission Deadline: Wednesday, November 19, 2025, by 11:59 p.m. EST

 

AAPOR 2026, “An LA Love Story of Data, Innovation, and the Quest for Truth”, is dedicated to reaffirming the critical role of public opinion and survey research in society. In an era of shifting perceptions and evolving methodologies, the conference will focus on connecting our work to the broader public, rebuilding trust in data, and ensuring that insights from polling and survey research remain an essential pillar in informed decision-making. 

AAPOR 2026 will open with a thought-provoking keynote that will set the stage for a dynamic panel discussion with key AAPOR experts, delving into the challenges and opportunities of restoring confidence, embracing change and innovation, and communicating and maintaining relevance. 

To foster deeper conversations on key issues, AAPOR 2026 will feature five themes, each focusing on a critical area that shapes our field. As part of the conference, each theme will include a special half-day deep dive of invited sessions. 

While many of these talks will be invited, there will be room for abstracts to be selected for deep dive sessions during the abstract review process by the program committee.

If you do not already have an account on AAPOR.org, you must create one to begin a submission. You do not need to be an AAPOR member to create an account. When prompted to log in, click ‘Set Up an Account’ and follow the prompts.

Space on the program is limited. There will be some individual submissions that cannot be integrated into a session and will, unfortunately, not be accepted. Authors of individual papers have a greater chance of acceptance if they are also willing to be considered for a methodological brief or poster.

Inquiries: All questions should be sent to rgreen@aapor.org.

Instructions to Submit Your Abstract:

Methodological Briefs

Papers

Posters

Roundtables

Presentation Tracks:

AI: Data Science, Machine Learning, and Big Data 
Example Topics:  applications of supervised and unsupervised learning in public opinion research; total survey error detection and mitigation in algorithmic models; integration of big data sources with traditional survey data; predictive modeling for nonresponse, respondent behavior, and survey outcomes; use of administrative and transactional data for population insights; ethical considerations in predictive modeling, including algorithmic fairness. 

AI: LLMs, NLP, and Generative AI
Example Topics: using LLMs to generate and refine survey questions; NLP techniques for coding open-ended responses, sentiment analysis, and topic modeling; evaluating the accuracy of generative AI in summarizing qualitative data; AI-driven interviewing and chatbot-assisted data collection; generation and evaluation of synthetic survey responses; ethical and transparency concerns in AI-generated insights. 

Attitudes and Opinions
Example Topics: Use of survey research to explore public opinion across a wide range of substantive issues. This includes attitudes on social and civil rights (racial justice, immigration, LGBTQ+ rights), public policy, and major societal changes like climate change or public health issues. This theme also covers broader issues related to justice, diversity, inclusion, and equity.   

Data Collection Methods, Field Operations, and Costs
Example Topics: innovations in contact strategies; operational challenges in longitudinal panel maintenance and respondent tracking; field staff training and supervision models; budgeting, cost modeling, and cost-benefit analysis for different survey designs; use of automation and digital tools in field operations. 

Data Collection – Modes and Multi-Mode
Example Topics:  mode effects in mixed-mode survey designs; transitioning from interviewer-administered surveys to web-based surveys; device effects (mobile vs. desktop) on survey completion and data quality; sequential vs. concurrent mode deployment strategies; mode-specific nonresponse and measurement error; hybrid designs for hard-to-reach populations. 

Elections, Politics, and Media
Example Topics: voting behavior among diverse communities; drivers of vote preference; election poll methods; polling accuracy; voter files; exit polling; presidential approval. 

Market Research
Example Topics: case studies using synthetic respondents; identifying fraudulent respondents; visualizing findings for C-level presentations; advertising/concept testing (quick turn or traditional); using omnibus surveys to manage multiple internal stakeholders; estimating market/mind share; using psychographic segmentation to create advertising/marketing strategies (and application to government programs); qualitative research that helps understand the ‘why’; applying market research techniques and data collection methods to government research questions or data needs; best practices for training new hires; market research panels addressing job opportunities, transitioning from the public to the private sector, and transitioning from the vendor to the client side; using AI to quickly (in less than one hour) answer stakeholder requests.

Multicultural, Multilingual, and Multinational Research
Example Topics: substantive findings from 3MC surveys; methodological issues in 3MC surveys. 

Probability and Nonprobability Samples, Frames, and Coverage Errors
Example Topics: sampling frames; sampling techniques; comparison of probability and nonprobability samples; administrative data coverage properties. 

Qualitative Research
Example Topics: methodological insights from or about qualitative research methods; in-depth interviewing methods; focus groups; qualitative content analyses; mixed methods data collection; qualitative research among diverse communities. 

Questionnaire Design and Interviewing
Example Topics: questionnaire design or formatting; visual design; interviewer effects; cognitive interviewing; response times; question characteristics. 

Research in Practice
Example Topics: data visualization; data security; writing successful RFPs; survey management; increasing the talent pipeline for public opinion research among diverse communities; other practical issues regarding survey data collection. 

Response Rates and Nonresponse Error
Example Topics: Nonresponse rates; nonresponse error; nonresponse-related paradata; adaptive and responsive design; incentive experiments; differential response patterns among diverse communities. 

Statistical Techniques and Estimation
Example Topics: weighting and estimation; imputation; small-area estimation; Bayesian modeling; multi-level regression and post-stratification; variance. 

AAPOR 2026 will feature five core themes, each anchored by a half-day Director’s Cut Session with extended invited talks designed to spark meaningful dialogue and cross-disciplinary insight: 

1. Survey Methods and Data Science – This track explores how data science methods (including machine learning, automation, and predictive modeling) compare to traditional survey research in both design and evaluation, particularly in the context of total survey error. Researchers are using data science to enhance survey error detection, assess data quality, and model respondent behavior, introducing new efficiencies while also raising important questions about bias, validation, and methodological rigor. AAPOR experts have played a key role in evaluating these frameworks, contributing to the broader conversation about integrating modern data science techniques into traditional survey methodologies.

2. Nonprobability and Probability Sampling – Probability sampling has long been the gold standard in survey research because it ensures every individual in a population has a known chance of being selected, leading to more representative and reliable results. However, declining response rates have made it increasingly challenging to achieve high-quality probability samples, pushing researchers to explore alternative approaches. Nonprobability methods offer faster, more cost-effective data collection but raise concerns about bias and generalizability.

3. Large Language Models and Qualitative Methods – Qualitative research has long relied on human interpretation to analyze open-ended responses, interview transcripts, and text-based data. Large language models are now being used to assist with tasks like text classification, sentiment analysis, and automated coding, offering efficiencies that help researchers process vast amounts of qualitative data. While these models introduce new capabilities, they do not replace human expertise. Instead, they can complement traditional qualitative approaches by uncovering broad patterns at scale while researchers provide the contextual nuance and ethical oversight needed for meaningful interpretation.

4. Representation and Dissemination – Leveraging Small Domain Estimation and cross-cultural research to ensure data reflects local contexts and lived experiences – Traditional statistical methods often rely on broad population averages, but these figures can feel disconnected from the lived experiences of individuals and communities. This track examines how researchers are working to make statistics more meaningful, ensuring that data resonates with the populations it intends to represent. As survey samples shrink and response rates decline, statistical modeling, data integration, and innovative estimation techniques are being used to produce localized, contextually relevant insights.

5. The Relevance of Polling, Official Statistics, and Public Trust – Polling, national surveys, and official statistics are more than just numbers: they are a barometer of truth, a foundation for democratic decision-making, and a public service that helps shape the future. These data sources inform policies, guide institutions, and ensure that governments remain accountable to the people they serve. But for them to fulfill their purpose, the public must not only understand them but also care about them. When data feels disconnected from real experiences, trust declines, and the ability to drive meaningful change weakens.

Please consider the following when submitting your abstract:

Scheduling Conflicts

To minimize scheduling conflicts, the abstract submission forms will ask you to list any known conflicts during the conference dates. While we make every attempt to accommodate scheduling conflicts, please be advised that in some instances it may not be possible to avoid them.  

Audio‐Visual Equipment

All meeting rooms will have projectors, screens, microphones, and laptops. The presenting author is responsible for bringing a digital copy of their presentation materials to the conference. 

Confirmation

Submitters will receive automatic email confirmations of their submissions within five minutes of final submission. Submitters who do not receive this confirmation should log back onto the submission site to verify their submission was entered correctly. The submitter is the primary contact person and is responsible for notifying all other authors/presenters of acceptance, rejection, scheduling, and any other relevant abstract information AAPOR provides.  

 

Review Process 

Each submission will be reviewed by at least two peer reviewers and by the Program Committee, a team of volunteers who assist the Conference Chair and Associate Conference Chair in making final decisions about each abstract's presentation track and session format. We expect to send acceptance notifications in mid-February 2026.  

AAPOR Code of Ethics

All submissions that present original survey data must abide by the AAPOR Code of Professional Ethics and Practices by reporting, at a minimum, the information specified in Section III‐A of the AAPOR Code. Further, proposals should communicate work that authors expect to reach an acceptable completion stage before the conference (e.g., by the end of April 2026).