79th Annual Conference Idea Groups

Introducing: Idea Groups

We are thrilled to unveil an exciting addition to this year’s Annual Conference: Idea Groups. This innovative pre-conference program offers a unique opportunity for participants to engage in intimate, focused networking sessions through guided discussions.

Taking place from 2:00 pm – 5:00 pm on Tuesday, just before the official start of the conference on Wednesday, Idea Groups provide a platform for attendees to convene in smaller, more specialized gatherings. These gatherings aim to foster dynamic conversations around shared topics and questions that resonate within our AAPOR community.

The primary objective of Idea Groups is to create a conducive environment for informal exchanges, enabling members to delve deeper into pertinent issues and explore collaborative solutions. Whether you’re seeking insights, sharing experiences, or simply connecting with like-minded peers, Idea Groups offer a space tailored to your professional interests.

We encourage all attendees to take advantage of this valuable opportunity to enhance their conference experience and forge meaningful connections within our vibrant AAPOR community. Spots are limited, and each attendee may participate in only one Idea Group.


Register for an Idea Group

Emerging Survey Methods in Low- and Middle-Income Countries

Organizer: Abigail Greenleaf, Columbia University

This idea group will convene to share current approaches for survey work in low- and middle-income countries (LMICs), with a focus on emerging methods. Bringing this group together will create a space for those working in LMICs to ask questions, share experiences, and imagine the future of survey work in LMICs. The discussion will be organized around the following questions:

  • What are some emerging methodological best practices?
  • Which survey topics in LMICs hold promise but have yet to be established?
  • How are we creating, and how can we create, meaningful partnerships with survey colleagues in LMICs to continue decolonizing development work?
  • Where is the future of survey research in LMICs headed?
  • How can we establish a community of practice for researchers working on surveys in LMICs to share experiences, research, and best practices and to identify opportunities for collaboration?

Open Science, Open Code, and Open Data

Organizer: Emily Molfino, US Census Bureau

We are in a time of increased action to promote transparency and open science within and outside governments. The U.S. Office of Science and Technology Policy launched the Year of Open Science in 2023 to advance national open science policy, provide access to the results of the nation’s taxpayer-supported research, accelerate discovery and innovation, promote public trust, and drive more equitable outcomes. AAPOR’s Transparency Initiative is designed to promote methodological disclosure through a proactive, educational approach that assists survey organizations in developing simple and efficient means for routinely disclosing the research methods associated with their publicly released studies.

We hope to facilitate a roundtable discussion on open science, code, and data. We welcome researchers from government, academia, and the private sector to discuss their experience with open science, whether as producers or consumers. Participants will be invited to share their own (and their institution's) work on open science, open data, and open code; propose best practices; and discuss opportunities and challenges based on their own experience and/or their institution's open science, data, and code policies.

Some of these open science concepts and discussions will be illustrated with initial work from the Census Bureau's Open Census program. Engaging and learning from the research and data community on open science principles and practices is important for the success of Open Census.

Questions for participants include but are not limited to:

  • What are the best practices (from academia, industry, and other stakeholder communities) in managing public access to research results, code, and data?
  • What are the biggest challenges to implementing open science policies, and how can these challenges be addressed?
  • What support will researchers need in an open science environment?
  • What is the desired end state with Open Census for consumers of Census Bureau research?
  • How can the Census Bureau monitor impacts on affected communities—authors and readers alike?
  • How can the Census Bureau ensure equity in publication opportunities?

Survey Costs: What Do We Know and What Can We Measure?

Organizer: James Wagner, University of Michigan

Survey costs are one of the most important factors driving design decisions and affecting survey errors in sample surveys. Although the field has made substantial progress toward understanding errors, we know less about how costs are related to survey errors. Although empirical evaluations of survey experiments sometimes include a cost analysis, that analysis is typically limited to the variable costs of data collection and sometimes to only one part of data collection (e.g., incentive costs). This means that, as a field, we have virtually no cross-survey or cross-institutional information about the conditions under which it is "worth it" to spend more money on different parts of a design. A recent typology of survey costs (Olson, Wagner, and Anderson 2021) proposed a variety of cost metrics to facilitate conversation among survey organizations about costs and errors without revealing proprietary cost information. We propose to use this framework to guide a discussion across institutions. Among the questions that will be posed for discussion are:

  • What kinds of cost recordkeeping systems are in place, and at what cadence are these records kept (e.g., expenses, timesheets, contact attempt records)?
  • What kinds of monetary or nonmonetary costs are sharable?
  • What evaluations have been done for potential errors in cost recordkeeping systems (e.g., errors in time sheets)?
  • To what extent are cost evaluations conducted, including comparing cost estimates to actual costs, allocation of budgets to fixed and variable costs, or variation in costs over different types of studies?
  • To what extent have organizations examined associations between different (monetary or nonmonetary) cost indicators?
  • To what extent and in what ways are organizations using cost models or statistical models to develop cost estimates or to predict changes in costs during the field period?

Survey Research Teaching Workshop

Organizer: Chase Harrison, Harvard University

Survey research is taught across many different universities and in many different departments and degree programs. Although some survey research instructors teach in departments and programs focused on surveys and related areas, most are relatively isolated within their departments and universities. This idea group will bring together a diverse group of survey instructors to discuss strategies for teaching survey methods, including strategies for integrating survey instruction with different disciplines.

Topics will include:

  • Syllabus Review: Bring your syllabus, share it with the group, and get suggestions and feedback
  • Activities: Bring ideas for in-class and homework activities
  • Instructional Challenges: Bring challenging instruction problems to the group and get help
  • Hybrid Teaching: Discuss experiments, successes, and failures with new instructional technologies such as flipped courses, asynchronous activities, and similar approaches
  • Integrating survey methods instruction with different fields
  • AI and survey research teaching

The Future of Qualitative Research

Organizer: Darby Steiger, SSRS

Qualitative research methods have been evolving rapidly in recent years, especially in response to the COVID-19 pandemic, when focus groups and in-depth interviews shifted almost entirely to virtual methods. More recently, generative AI has offered unprecedented access to tools with the potential to make our qualitative research processes more efficient. For example, generative AI can perform tasks such as suggesting research hypotheses, drafting questions for discussion guides and recruitment screeners, producing synthetic respondents to test protocols, suggesting probes for participants, transcribing and summarizing qualitative data, and offering recommendations. The market research industry has been rapidly adopting a number of these tools. However, these tools raise concerns within the public opinion research community, as they do not always align with our needs and obligations as social science researchers.

The purpose of this session is to discuss the benefits and downsides of incorporating various AI methods and other new tools into social science qualitative research, to share how our organizations have been considering or testing these methods, and to hold a shared conversation about the considerations and implications for AAPOR, QUALPOR, and the future.

What Can Survey Methodology / AI Learn from AI / Survey Methodology?

Organizers: Stephanie Eckman, University of Maryland & Gina Walejko, Google

AI is increasingly used to answer a wide range of research questions. Yet most AI/ML researchers are not trained in data collection or survey methods. Similarly, few survey methodologists have explored the opportunities that AI and ML may offer the field of survey methodology.

In this idea group we will:

  1. Discuss how ML, AI, generative AI, and other advanced technologies might change the way we approach survey design, administration, and analysis; and

  2. Discuss how survey methodology can improve the quality of training data, feedback data, generative AI workflows, and machine learning model performance.

To answer the first question, we will discuss:

  • How might AI & ML disrupt the way we currently design, administer, and analyze surveys?

  • What AI applications on the horizon will be applied to the way surveys and public opinion research are currently done? And will any of them overcome current limitations associated with surveys?

  • What risks exist?

To answer the second question, we will discuss:

  • How might elements of the cognitive survey response process also apply to the process of labeling training data and subsequently affect training data quality? For example, which cognitive biases are likely to impact model labeling?

  • What sampling strategies might make machine learning models more fair and more efficient? What sampling strategies should people apply when training their machine learning models?

  • How do we attract diverse labelers?

  • What have we learned from coding open-ended survey questions that can be applied to labeling tasks?

Based on both discussions, we will also consider:

  • What new skills, education, and synergies may be beneficial to survey and public opinion researchers and practitioners?

  • What training could our community offer to computer scientists and others engaged in AI / ML, and how do we get computer scientists interested?