Improving Trust in Surveys by Focusing on the Respondent User Experience

08/29/2024

Kevin Collins, Survey 160

Many discussions of “trust in polling” focus on the perspective of consumers of polls: do people trust polls when they read toplines and crosstabs? But this is not the aspect of trust over which researchers have the most control. Rather, we should start by prioritizing trust among participants in survey research by focusing on improving the respondent user experience. Researchers ask respondents for three things: their private information (whether biographical information or viewpoints), their time, and, in online surveys, their willingness to click on hyperlinks they may not recognize. Respondents must trust researchers enough to provide these things, and to earn that trust, we as researchers should minimize what we ask and be honest about it.

First, we should not ask for personal information we don’t truly need. Our research at Survey 160 has shown that respondents do not like being asked to confirm their name. We lose more respondents to refusals when asking them to confirm their name than we do by limiting the match to birth year and gender. The match between respondents and phone numbers on augmented voter files is imperfect, with quality varying by voter file vendor and state. But by using list-based samples, and confirming with less-sensitive information that we are talking to the intended respondent, we can avoid asking respondents for sensitive personal information. Furthermore, when we do need to ask for personal information, we should give a reason why. We find marginally higher age and gender match rates when we provide a reason for asking respondents their names (significant at the p<0.1 level). In short, we need to limit what we ask of people, and when we do need that information, we must give reasons for asking.

Second, make surveys as short as possible, and be honest about the length. At Survey 160, our research has found that very short surveys get higher completion rates when we tell respondents they are short. Similarly, signposting near the end of a survey that there are only a few questions left can help increase completion rates. But lying about length is incredibly alienating, as in the too-common practice (which I have experienced personally) of phone interviewers telling respondents the survey is shorter than it is. Let’s ask less of respondents, and not lie to them about what we are asking of them.

Third, for text surveys, when the survey is short enough, we should not ask respondents to click a link to participate, but instead conduct interviews over text message. When asked directly, respondents typically prefer to answer a survey back and forth over text rather than click on a link from a source they may not recognize. Consistent with that preference, our research has shown that conducting a survey fully over text message produces higher response rates and lower costs for a sufficiently short survey. However, because most respondents can self-administer a web instrument faster than a back-and-forth survey can be conducted, for a sufficiently long survey it is more economical to field via text-to-web than via live-interviewer text message. But that doesn’t mean it is the respondents’ preference. So when possible, let’s not ask respondents to take unnecessary leaps of faith.

Building trust in surveys should start with building trust among respondents by improving their user experience. To do that, researchers should limit what we ask of survey participants, be honest about those asks, and give reasons for asking. If respondents cannot trust researchers after participating in their research, we cannot expect them to trust the results of surveys either.