Around the world, pollsters have had some high-profile flops lately. In both the U.K. and Israel, pre-election polls earlier this year predicted much tighter races than actually occurred. Last year, Scots voted against independence by a wider-than-expected margin. In the U.S., many pollsters underestimated last year’s Republican midterm wave, and some observers have suggested that polls simply aren’t appropriate tools for studying certain subjects, such as religion.
Cliff Zukin, past president of the American Association for Public Opinion Research and a Rutgers University political science professor, wrote recently that “two trends are driving the increasing unreliability of election and other polling in the United States: the growth of cellphones and the decline in people willing to answer surveys.”
Despite those challenges, social scientists, market researchers, political operatives and others still rely on polls to find out what people are thinking, feeling and doing. But with response rates low and heading lower, how can survey researchers have confidence in their findings? Scott Keeter, director of survey research at Pew Research Center, addresses this and related questions below.
Do low response rates in and of themselves make a poll unreliable?
The short answer here is “no.” The potential for what pollsters call “nonresponse bias” – the unwelcome situation in which the people we’re not reaching are somehow systematically different from the people we are reaching, thus biasing our poll results – certainly is greater when response rates are low. But the mere existence of low response rates doesn’t tell us anything about whether or not nonresponse bias exists. In fact, numerous studies, including our own, have found that the response rate in and of itself is not a good measure of survey quality, and that thus far, nonresponse bias is a manageable problem.
For example, our 2012 study of nonresponse showed that despite declining response rates, telephone surveys that include landlines and cellphones and are weighted to match the demographic composition of the population (part of standard best practices) continue to provide accurate data on most political, social and economic measures. We documented this by comparing our telephone survey results to various government statistics that are gathered with surveys that have very high response rates. We also used information from two national databases that provide information about everyone in our sample – both respondents and non-respondents – to show that there were relatively small differences between people we interviewed and those we were unable to interview.
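To make that point concrete, here is a rough, hypothetical sketch (the numbers are invented, not taken from the study): for a simple mean, the bias of a respondent-only estimate works out to the nonrespondent share of the population times the gap between respondents and nonrespondents on the measure, so a very low response rate with near-identical groups can yield less bias than a much higher response rate with a large gap.

```python
# Hypothetical illustration (numbers invented): for a simple mean, the bias of
# a respondent-only estimate is the nonrespondent share of the population
# times the respondent/nonrespondent gap on the measure.

def nonresponse_bias(nonresponse_rate: float,
                     respondent_mean: float,
                     nonrespondent_mean: float) -> float:
    """Bias of the respondent-only mean relative to the full-population mean."""
    return nonresponse_rate * (respondent_mean - nonrespondent_mean)

# A 9% response rate, but respondents and nonrespondents barely differ:
print(nonresponse_bias(0.91, 0.52, 0.51))  # ~0.009, about 1 percentage point

# A 50% response rate, but the two groups differ a lot:
print(nonresponse_bias(0.50, 0.60, 0.40))  # ~0.10, about 10 percentage points
```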
But it’s important to note that surveys like ours do have some biases. Better-educated people tend to be more available and willing to do surveys than are those with less education. Nonwhites are somewhat underrepresented. People who are interested in politics are more likely to take surveys that have to do with politics. But most of these biases can be corrected through demographic weighting of the sort that is nearly universally used by pollsters.
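To illustrate what that demographic weighting involves, here is a minimal sketch of cell-based post-stratification on a single invented variable, education. The category shares are hypothetical, and pollsters generally rake on several demographics at once, but the principle is the same: respondents from underrepresented groups count for a bit more, and those from overrepresented groups for a bit less.

```python
# Minimal sketch of cell-based post-stratification weighting on one invented
# variable (education). Each respondent is weighted by the ratio of the
# group's population share to its share of the sample.
from collections import Counter

# Hypothetical population benchmarks (e.g., from high-response government data)
population_share = {"college": 0.35, "no_college": 0.65}

# Hypothetical raw sample in which better-educated people over-respond
respondents = ["college"] * 500 + ["no_college"] * 500
sample_share = {k: n / len(respondents) for k, n in Counter(respondents).items()}

weights = {k: population_share[k] / sample_share[k] for k in population_share}
for group, w in weights.items():
    print(f"{group}: weight = {w:.2f}")
# college: weight = 0.70, no_college: weight = 1.30 -- the weighted sample now
# matches the assumed population's education mix.
```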
Are some kinds of biases harder to correct than others?
While weighting helps correct the overrepresentation of voters and the politically engaged, it does not eliminate it. This makes it especially important to have accurate ways of determining who is likely to vote in elections, a problem that all political pollsters grapple with.
The one other source of nonresponse bias that seems to persist after we apply demographic weighting is the tendency of survey participants to be significantly more engaged in civic activity than people who do not take part in surveys. People who participate in volunteer activities are more likely to agree to take part in surveys than those who don’t. This might lead us to overestimate things like the proportion of U.S. adults who contact elected officials, work with other people to solve community problems, or attend religious services on a weekly basis (though even in surveys with very high response rates, Americans report church-attendance rates that appear to substantially exceed actual attendance). Because of this, we try to be especially cautious in interpreting data about volunteer activity and related concepts. But fortunately, this characteristic of survey participants is not strongly related to most other things we study.
Survey response rates have been falling for many years. Why has this become of particular concern now?
One reason there’s greater public awareness of falling response rates is that we and other researchers have been closely tracking the decline, monitoring its impact and talking publicly about the issue. Our 2012 study of nonresponse documented the downward trend, reporting an average response rate that year of 9%, a figure that’s been widely cited since. There’s also been more discussion lately because of faulty election polls in the U.S. in 2014 and in Britain and Israel this year.
It’s important to keep in mind that even if there is more public discussion about the nonresponse issue now, it’s not a new concern among survey researchers. Scholars were noting the declines in response rates 25 years ago. We conducted our first major study of the impact of survey nonresponse in 1997, when our telephone response rates were 36%.
Do we know why fewer people are willing to respond to surveys than in years past?
The downward trend in response rates is driven by several factors. People are harder to contact for a survey now than in the past. That’s a consequence of busier lives and greater mobility, but also of technology that makes it easier for people to ignore phone calls from unknown numbers. The rising rate of outright refusals is likely driven by growing concerns about privacy and confidentiality, as well as perceptions that surveys are burdensome.
Does Pew Research Center see the same pattern of low/declining response rates in other countries?
Yes indeed. Nonresponse to surveys is growing in many wealthy nations, and for most of the same reasons it’s increasing here in the U.S.
Are low response rates the reason, or at least a big reason, why so many pollsters around the world seem to have missed the mark recently in their pre-election polls?
It’s not at all clear that nonresponse bias is to blame for the recent troubles with election polls, though that’s one possible source of the errors. Equally important may be the methods used to determine who is a likely voter, or how to deal with voters who tell pollsters that they are undecided in the race. The British Polling Council commissioned a review of the polls in the 2015 general election, following the failure of most polls there to forecast the Conservative victory. That review has not yet been completed.
How do response rates compare between calls to a landline phone and calls to a cellphone?
We are obtaining nearly identical response rates on landline phones and cellphones. However, it takes considerably more interviewer time to get a completed interview on a cellphone than a landline phone, because cellphone numbers have to be dialed manually to conform to federal law. In addition, many cellphones are answered by minors, who are ineligible for the vast majority of our surveys. And unlike a landline, a cellphone is treated as a personal device, so we do not attempt to interview anyone other than the person who answers.
In general, how does Pew Research Center attempt to overcome the challenges posed by low response rates in its survey research?
Pew Research Center devotes considerable effort to ensuring that our surveys are representative of the general population. For individual surveys, this involves making numerous callbacks over several days to maximize the chances of reaching respondents, and making sure that an appropriate share of our sample is interviewed on cellphones. We carefully weight our surveys to match the general population demographically.
Perhaps most importantly, Pew Research Center’s team of methodologists is engaged in ongoing research into improving our existing survey techniques while also looking at alternative ways of measuring the attitudes and behaviors of the public. As society continues to change and technology evolves, the future of social research is likely to involve some combination of surveys and other forms of data collection that don’t involve interviews. In the meantime, we continue to apply the best survey practices we can and endeavor to be as transparent as possible about the quality of our data and how we produce them.
For more information on the methodology behind our research, visit our Methods page.