Response Rates and Survey Quality
However, two factors have now undermined the role of the response rate as the primary arbiter of survey quality. Largely because of increasing refusals, response rates across all modes of survey administration have declined, in some cases precipitously. As a result, organizations have had to put additional effort into administration, making surveys of all types more costly. At the same time, studies comparing survey estimates to benchmark data from the U.S. Census or from very large governmental sample surveys have called the positive association between response rates and quality into question. Furthermore, a growing emphasis on total survey error has led methodologists to examine surveys, even those with acceptably high response rates, for evidence of nonresponse bias.
Results that show the least bias have, in some cases, come from surveys with less than optimal response rates. Experimental comparisons have likewise revealed few significant differences between estimates from surveys with low response rates and short field periods and those from surveys with high response rates and long field periods. (The difficulty of determining bias by comparing survey estimates to outside measurements has, however, led to ingenious strategies. One recent study developed an internal benchmark by using the 50/50 gender split of heterosexual married couples to gauge the accuracy of survey estimates by gender among the respondents in six different surveys.)
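The logic of that internal benchmark can be sketched in a few lines. This is an illustration, not the cited study's code: because heterosexual married couples contribute exactly one husband and one wife each, the male share among married respondents should be 0.50 in an unbiased sample, and any deviation estimates gender-related nonresponse bias.

```python
def gender_benchmark_bias(n_male: int, n_female: int) -> float:
    """Estimate gender-related nonresponse bias among married respondents.

    Compares the observed male share among heterosexual married
    respondents to the 0.50 internal benchmark implied by the
    one-husband, one-wife composition of each couple.
    Returns observed male share minus 0.50 (positive = men overrepresented).
    """
    observed_male_share = n_male / (n_male + n_female)
    return observed_male_share - 0.50


# Hypothetical counts for illustration: 540 married men and 460 married
# women among a survey's respondents imply men are overrepresented by
# 4 percentage points.
bias = gender_benchmark_bias(540, 460)
```

A survey could report this deviation alongside its response rate as one concrete indicator of nonresponse bias.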
There is currently no consensus about the factors that produce the disjuncture between response rates and survey quality. But the evidence does suggest several rules of thumb for consumers of survey reports and for researchers.
Researchers should always include in their survey reports the response rate, computed according to the appropriate AAPOR formula (AAPOR's Response Rate Calculator, an Excel spreadsheet, implements these formulas) or another formula that is fully described. Furthermore, several other measures of quality should become part of reports, especially when a response rate is low. On their side, consumers of survey results should treat all response rates with skepticism, since these rates do not reliably differentiate between accurate and inaccurate data. Instead, consumers should pay attention to other indicators of quality that are included in reports and on websites, such as insignificant levels of bias, low levels of missing data, and conformity with other research findings.
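To make the computation concrete, here is a minimal sketch of two of AAPOR's standard formulas, Response Rate 1 (RR1) and Response Rate 2 (RR2), using the disposition categories from AAPOR's Standard Definitions. The category names (complete interviews I, partial interviews P, refusals R, non-contacts NC, other eligible non-interviews O, and unknown-eligibility cases UH and UO) follow that document; the example counts are invented for illustration.

```python
def aapor_rr1(I, P, R, NC, O, UH, UO):
    """AAPOR Response Rate 1: complete interviews divided by all
    eligible cases plus cases of unknown eligibility."""
    return I / ((I + P) + (R + NC + O) + (UH + UO))


def aapor_rr2(I, P, R, NC, O, UH, UO):
    """AAPOR Response Rate 2: complete plus partial interviews over
    the same denominator as RR1."""
    return (I + P) / ((I + P) + (R + NC + O) + (UH + UO))


# Hypothetical dispositions: 600 completes, 50 partials, 200 refusals,
# 100 non-contacts, 25 other, 25 unknown-eligibility cases in total.
rr1 = aapor_rr1(I=600, P=50, R=200, NC=100, O=25, UH=20, UO=5)  # 0.60
rr2 = aapor_rr2(I=600, P=50, R=200, NC=100, O=25, UH=20, UO=5)  # 0.65
```

RR2 is always at least as large as RR1 because it counts partial interviews as responses; a report should state which formula was used, since the choice can shift the headline rate by several points.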