Research suggests that the social interaction inherent in a telephone or in-person interview may exert subtle pressures on respondents that affect how they answer questions. The ways respondents change their answers in response to these pressures are the basis of “mode studies” in social science.
For example, respondents may feel a need to present themselves in a more positive light when speaking to another human being rather than answering questions on a computer, leading to an overstatement of socially desirable behaviors and attitudes and an understatement of opinions and behaviors they fear would elicit disapproval from another person. Previous research has shown this to be true in specific situations, with respondents understating such activities as drug and alcohol use and overstating activities like donating to charity or helping other people. This phenomenon is often referred to as “social desirability bias.” These effects may be stronger among certain types of people than others, introducing additional bias into the results.
The existence of mode-of-interview effects in survey research is well-documented.9
Pew Research Center published an extensive experiment on the subject in 2015, finding an average mode effect of about 5 percentage points across 60 questions on a wide array of topics. That study found evidence that very negative opinions about political figures are less likely to be expressed to an interviewer than in the relative anonymity of a self-administered online interview. In general, mode effects were more common on questions where respondents may have felt a need to present themselves in a more positive light to an interviewer. For example, some of the largest effects were observed on questions about experience with financial hardship. Low-income respondents interviewed by phone were much less likely than those interviewed on the web to say they had trouble paying for food or affording medical care.
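To make the “average mode effect” figure concrete, one simple way to summarize it is the mean absolute difference between telephone and web estimates across a set of questions. The sketch below illustrates that calculation; the question names and percentages are invented for the example and are not the Center's actual data or computation.

```python
# Hypothetical illustration: summarizing an average mode effect as the mean
# absolute difference between phone and web estimates across questions.
# Question names and percentages are invented for this example.
phone_estimates = {"trouble_paying_food": 18.0, "donated_to_charity": 55.0, "hardship_medical": 21.0}
web_estimates = {"trouble_paying_food": 26.0, "donated_to_charity": 49.0, "hardship_medical": 29.0}

diffs = [abs(phone_estimates[q] - web_estimates[q]) for q in phone_estimates]
average_mode_effect = sum(diffs) / len(diffs)  # in percentage points

print(f"Average absolute mode effect: {average_mode_effect:.1f} points")
```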
Mode effects are thought to be less common in surveys about politics, but there is evidence that they do occur.10
Respondents interviewed by another person may be somewhat more likely to attempt to present themselves as “a good citizen” who votes and keeps up with public affairs. And certain political opinions on sensitive topics like race may be subject to social desirability bias. Considerable research has found that the race of the interviewer can affect responses to questions about racially sensitive topics. Given the controversies surrounding Donald Trump during his campaign for president, some people may be reluctant to admit that they support him. If this is more likely to happen with a live interviewer than on the web, it might lead telephone surveys to understate support for him. It is difficult to know how sizable this effect is, if it exists at all. A mode experiment conducted by Morning Consult in December 2015 with Republicans in a non-probability sample found that Trump performed about 6 points better in a Republican nomination preference question online than in live telephone interviews. A second study using a similar methodology, conducted with the general electorate in October 2016, found no overall mode effect in presidential vote intention among likely voters.
One clue might be found by looking at pre-election polls in 2016 and determining whether surveys without interviewers were more accurate than those with interviewers in predicting support for Trump. The record is mixed on this, however. In the 2016 primaries, live telephone polls were at least as accurate as – if not more accurate than – self-administered polls.11 In the general election, live telephone polls appeared to perform somewhat worse than interactive voice response (IVR) polls but better than online polls in several state elections. Because the sampling frames used by IVR and live-interviewer polls were different, it is difficult to attribute the greater accuracy of IVR polls to an ability to obtain more honest responses from voters.
Similarly, the comparable accuracy of online and live-interviewer pre-election polls is not incontrovertible proof of the absence of a mode-of-interview effect, since the two types of polls rely on very different sampling methods. Online polls were conducted largely with non-probability samples, while live-interviewer polls used random samples (either RDD samples of landline and cellphone numbers, or random samples of records drawn from voter databases).
The current study addresses these issues by drawing respondents from a common pool of adults in the ATP and randomly assigning them a mode of interview. Because a great deal is known about the panelists, it is possible to assess the comparability of the resulting online and telephone samples and weight them accordingly to ensure that any observed differences are a result of the mode of interview and nothing else.
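As a rough illustration of that design, the sketch below randomly splits a shared pool of panelists into web and phone groups and then weights each group to a common benchmark (here, the education profile of the full pool). The field names, the single weighting variable and the simple cell-weighting step are assumptions made for illustration; they are not the actual ATP assignment or weighting procedure.

```python
import random
from collections import Counter

# Hypothetical sketch of the design described above: panelists from a common
# pool are randomly assigned a mode of interview, and each mode's sample is
# then weighted so the two groups match on a known characteristic.
panelists = [
    {"id": i, "educ": random.choice(["HS or less", "Some college", "College+"])}
    for i in range(2000)
]

# Random assignment: shuffle the pool and split it into two mode groups.
random.shuffle(panelists)
half = len(panelists) // 2
web_group, phone_group = panelists[:half], panelists[half:]

# Benchmark distribution: the education profile of the full panel pool.
pool_share = {k: v / len(panelists) for k, v in Counter(p["educ"] for p in panelists).items()}

def cell_weights(group):
    """Weight each respondent so the group's education profile matches the pool."""
    group_share = {k: v / len(group) for k, v in Counter(p["educ"] for p in group).items()}
    return {p["id"]: pool_share[p["educ"]] / group_share[p["educ"]] for p in group}

web_weights = cell_weights(web_group)
phone_weights = cell_weights(phone_group)
```

With both groups weighted to the same benchmark, a remaining difference in how the two groups answer the same question can more plausibly be attributed to the mode of interview rather than to differences in who ended up in each sample.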