
Muslim Americans: No Signs of Growth in Alienation or Support for Extremism

Survey Methodology

Muslim Americans constitute a population that is rare, dispersed, and diverse. It includes many recent immigrants from multiple countries with differing native tongues who may have difficulty completing a public opinion survey in English. The intense attention paid to Muslims in the aftermath of the 9/11 attacks and increased attention to Islamic extremism may have made them more reluctant to cooperate with a survey request from an unknown caller. Collectively, these characteristics present a significant challenge to anyone wishing to survey this population.

Despite the challenges, the Pew Research Center study was able to complete interviews with 1,033 Muslim American adults (ages 18 and older) from a probability sample consisting of three sampling frames. Interviews were conducted by telephone between April 14 and July 22, 2011, by the research firm Abt SRBI. Interviews were conducted in English, Arabic, Farsi and Urdu. After taking into account the complex sample design, the average margin of sampling error on the 1,033 completed interviews with Muslims is +/- 5.0 percentage points at the 95% level of confidence. This section describes how the study was designed and executed.
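As a rough illustration of how a complex design widens the margin of error relative to a simple random sample of the same size, the sketch below computes a 95% margin of error for a proportion with a design-effect inflation. The design effect value used here is a placeholder chosen only to show the mechanics; the study's actual standard errors were computed with jackknife replication, as described in the Weighting section.

```python
import math

def margin_of_error(n, deff=1.0, p=0.5, z=1.96):
    """95% margin of error for a proportion, inflated by a design effect."""
    return z * math.sqrt(deff * p * (1 - p) / n)

n = 1_033
# With no design effect, a simple random sample of 1,033 gives about +/- 3 points.
print(round(100 * margin_of_error(n), 1))
# An illustrative (placeholder) design effect widens this toward the +/- 5.0
# points reported for the study's complex sample.
print(round(100 * margin_of_error(n, deff=2.7), 1))
```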

Sample Design

In random digit dial (RDD) surveys of the English-speaking U.S. population, roughly one-half of one percent of respondents typically identify as Muslim in response to a question about religious tradition or affiliation (or about 5 out of every 1,000 respondents). This extremely low incidence means that building a probability sample of Muslim Americans is difficult and costly. The demographic diversity of the population – especially with respect to race and national origins – adds to the challenge. Moreover, analysis of the 2007 survey and other previous research indicates that the Muslim population is not concentrated in a few enclaves but is highly dispersed throughout the U.S. And since 2007 the proportion of people who can be reached only by cell phone has grown.

The sample design attempted to address the low incidence and dispersion of the Muslim American population, as well as the cell phone issue, by employing three sampling sources: an RDD landline sample, an RDD cell phone sample and a sample of previously identified Muslim households.

1. Landline RDD: The landline RDD frame was divided into five strata, four of which were based on the estimated density of the Muslim population in each county of the United States as determined through an analysis of Pew Research’s database of more than 260,000 survey respondents and U.S. Census Bureau data on ethnicity and language. To increase the efficiency of the calling, the lowest density stratum – estimated to be home to approximately 8%-19% of U.S. Muslims – was excluded. A disproportionate sampling strategy was employed to maximize the effective sample size from the other three geographic strata; a total of 131 interviews were completed in the three strata included. The fifth stratum was a commercial list of 608,397 households believed to include Muslims, based on an analysis of first and last names common among Muslims. This stratum yielded completed interviews with 501 respondents.

2. Cellular RDD: The cellular RDD frame was divided into the same four geographic strata as the landline RDD frame based on the estimated density of the Muslim population. As with the landline frame, the lowest density stratum was excluded in order to increase data collection efficiency. All Muslim adults reached in the cell sample were interviewed, regardless of whether or not they also had a landline. The fact that people with both types of phones had a higher chance of selection was adjusted for in the weighting as discussed below. The incidence rate of Muslim Americans was roughly three times higher in the cell frame than the landline frame (excluding the list stratum). A total of 227 interviews were completed in the cell RDD frame.

3. Recontact sample: In addition, a sample of previously identified Muslim households was drawn from Pew Research Center’s interview database and other RDD surveys conducted in recent years. This sample contained both landline and cell phone numbers. Recontacting these respondents from prior surveys yielded 174 completed interviews for this study.

The strength of this research design was that it yielded a probability sample. That is, each adult in the U.S. had a known probability of being included in the study. The fact that some persons had a greater chance of being included than others (e.g., because they live in places where there are more Muslims) is taken into account in the statistical adjustment described below.

RDD Geographic Strata

Pew Research Center surveys conducted in English (and some with a Spanish option) typically encounter about five Muslim respondents per 1,000 interviews, an unweighted incidence rate of 0.5%. The rate is also very similar to that encountered by other national surveys (for instance, see Tom Smith’s “The Muslim Population of the United States: The Methodology of Estimates” in Public Opinion Quarterly, Fall 2002). This low incidence means that the costs of building an RDD sample of Muslim Americans by screening a general public sample are prohibitive. Accordingly, it was necessary to develop alternative approaches that would allow for estimation of the probabilities of selection but increase the yield from screening.

An analysis of the geographic distribution of the Muslim population was undertaken, using several different sources of data. A key resource was the Pew Research Center database of more than 260,000 telephone interviews conducted between 2007 and 2011; it was used to estimate the density of Muslims in each U.S. county. Another resource was data from the American Community Survey (ACS), which is the U.S. Census Bureau’s replacement for the decennial census long form. The Census Bureau does not collect information about religion, but the ACS does include measures of ancestry, nationality for immigrants, and languages spoken. These measures were used to analyze the geographic distribution of adults who are from (or whose ancestors are from) countries with significant or majority Muslim populations, or who speak languages commonly spoken by Muslims. This yielded additional county-level estimates of the density of Muslims.

These measures were highly correlated and were used to sort counties into four different groups based on the estimated incidence of Muslims in each county. We refer to these mutually exclusive groups as the geographic strata. The lowest density stratum accounts for 8% of all Muslim interviews conducted by the Pew Research Center over the past five years; the second lowest accounts for 30% of Muslim interviews; the medium density stratum accounts for 38%; and the highest density stratum accounts for 24%. Drawing on the analysis of previous Pew Research surveys, ACS data, and the results of a pilot test, an optimal sampling allocation plan was developed for the RDD geographic strata. In total, 41,599 screening interviews in the RDD geographic strata were completed: 21% in the high density stratum, 52% in the medium density stratum and 27% in the low density stratum.
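To illustrate the kind of stratification described above, the sketch below assigns hypothetical counties to density strata from an estimated incidence measure. The county codes, values, cut points and column names are placeholders; the study's actual strata were built from the Pew interview database and ACS ancestry and language measures.

```python
import pandas as pd

# Hypothetical county-level estimates of Muslim incidence (per 1,000 adults);
# the FIPS codes and values are placeholders.
counties = pd.DataFrame({
    "county_fips": ["26163", "36081", "17031", "48201", "31109"],
    "est_muslims_per_1000": [22.0, 18.5, 9.0, 4.5, 0.8],
})

# Illustrative cut points only, not the study's actual thresholds.
bins = [-float("inf"), 2.0, 6.0, 15.0, float("inf")]
labels = ["lowest", "low", "medium", "high"]
counties["stratum"] = pd.cut(counties["est_muslims_per_1000"], bins=bins, labels=labels)

# The lowest-density stratum was excluded from RDD screening for efficiency.
rdd_screening_frame = counties[counties["stratum"] != "lowest"]
print(rdd_screening_frame)
```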

The lowest density stratum, which included 8% of all U.S. Muslims in Pew Research surveys (and up to 19% based on estimates derived from ACS data), also includes 45% of the total U.S. population. As a practical matter, the analysis of the Pew Research database indicated that 15,000 screening interviews would have to be conducted in this stratum to yield an estimated 10 Muslim respondents. In order to put the study’s resources to the most efficient use, this stratum was excluded from the geographic strata of the RDD sample design, although persons living in these counties were still covered by the list stratum and recontact frame (a total of 113 interviews were completed in the lowest density areas from the list stratum and recontact frame).

List Stratum

Within the landline RDD frame of U.S. telephone numbers, a targeted, commercial list was used to identify 608,397 numbers that had a relatively high probability of belonging to a household with a Muslim adult. This list was defined as its own stratum within the landline RDD frame. This list was constructed from a commercial database of households where someone in the household has a name commonly found among Muslims. The list was prepared by Experian, a commercial credit and market research firm that collects and summarizes data from approximately 113,000,000 U.S. households. The analysis of names was conducted by Ethnic Technologies, LLC, a firm specializing in multicultural marketing lists, ethnic identification software, and ethnic data appending services. According to Experian, the analysis uses computer rules for first names, surnames, surname prefixes and suffixes, and geographic criteria in a specific order to identify an individual’s ethnicity, religion and language preference.

In 2011, Abt SRBI purchased Experian’s database of more than 608,000 households thought to include Muslims. This list consists of contact information, including telephone numbers. A test of the list, combined with the results of the screening interviews conducted in the course of the main survey, found that the Experian list was a highly efficient source for contacting Muslims; roughly three-in-ten households screened from the Experian list included an adult Muslim. The list does not, however, by itself constitute a representative sample of American Muslims. Muslims on the Experian list are somewhat better educated, more likely to be homeowners, more likely to be foreign born and of South Asian descent and much less likely to be African American or to have converted to Islam compared with Muslim Americans as a whole.

Recontact Frame

In addition to contacting and interviewing a fresh sample of Muslim Americans, the phone numbers of all Muslim households from previous Pew Research surveys conducted between 2007 and 2011 were called. Adults in these households were screened and interviewed in the same manner used for the RDD samples. No attempt was made to reinterview the same respondent from earlier surveys. Pew Research’s survey partners, Abt SRBI and Princeton Survey Research Associates International (PSRAI), also provided lists of Muslims interviewed in the course of other national surveys conducted in recent years. In total, the recontact frame consisted of phone numbers for 756 Muslims (552 landline numbers and 204 cell phone numbers) interviewed in recent national surveys. From this frame, 262 households were successfully screened, resulting in 174 completed interviews with Muslims.

The greatest strengths of the recontact frame are that it consists entirely of respondents originally interviewed in the course of nationally representative surveys based on probability samples and that it includes respondents who live in the geographic stratum that was excluded from the landline and cell RDD samples. However, there also are certain potential biases of the recontact frame. Perhaps most obviously, all of the households previously interviewed in the recontact frame were interviewed in English, or for a small number, in Spanish. Another potential source of bias relates to the length of time between when respondents were first interviewed and the current field period; respondents still residing in the same household in 2011 as in an earlier year may represent a more established, less mobile population compared with those from households that could not be recontacted.

Analysis of the survey results suggests that there are some differences between Muslims in the recontact frame and those in the landline and cell RDD frames. For example, Muslims from the recontact frame are more likely to be homeowners, less satisfied with national conditions, and less likely to have worked with others in their community to solve a problem compared with Muslims as a whole. These differences, however, are not large enough to substantially affect the overall survey’s estimates.

Questionnaire Design

As with the 2007 Muslim American survey, the goal of the study was to provide a broad description of the characteristics and attitudes of the Muslim American population. Thus, the questionnaire needed to cover a wide range of topics but be short enough that respondents would be willing to complete the interview.

Much of the content was drawn from the 2007 survey so that any changes in attitudes could be tracked. New questions also were taken from the Pew Research Center’s U.S. surveys and the Pew Global Attitudes Project’s surveys to provide comparisons with the U.S. public, U.S. Christians and Muslim publics in other countries.

Because this population includes many immigrants who have arrived in the U.S. relatively recently, the survey was translated and conducted in three languages (in addition to English) identified as the most common among Muslim immigrants: Arabic, Farsi and Urdu. Translation of the questionnaire was handled by a professional translation service under the direction of Abt SRBI. A three-step process was used: translation by a professional translator, back-translation to English by a second translator, and proofreading and review for quality, consistency and relevance. The translated questionnaires were independently reviewed by translators retained by the Pew Research Center, and revisions were made based on their feedback. A total of 925 interviews were conducted in English, 73 in Arabic, 19 in Farsi and 16 in Urdu.

Another issue confronted in the questionnaire design was the possibility that members of this population are reluctant to reveal their religious identification because of concerns about stereotyping and prejudice. Both the 2007 and 2011 surveys show that many Muslim Americans believe they are targeted by the government for surveillance and some also report personal experiences with discrimination and hostility. Several features of the questionnaire were tailored to deal with these concerns.

The initial questions were chosen to be of a general nature in order to establish rapport with respondents, asking about satisfaction with the community, personal happiness, and personal characteristics such as home ownership, entrepreneurship, and college enrollment. After these items, respondents were asked about their religious affiliation, choosing from a list that included Christian, Jewish, Muslim, Hindu, Buddhist or “something else.” Respondents who identified as Muslim proceeded to the substantive portion of the questionnaire, while those who were not Muslim were asked whether anyone else in the household was Muslim; in 39 households, interviews were conducted with someone other than the person originally selected. If there was no Muslim in the household, the respondent was asked a short set of demographic questions to be used for weighting.

At this point in the interview, respondents were told that: “As mentioned before, this survey is being conducted for the Pew Research Center. We have some questions on a few different topics, and as a token of our appreciation for your time, we would like to send you $50 at the completion of this survey.” After this introduction, a series of questions followed (e.g., satisfaction with the state of the nation, presidential approval, civic involvement, everyday activities, opinions about political and social issues). At the conclusion of this series, respondents were told: “Just to give you a little more background before we continue, the Pew Research Center conducts many surveys on religion and public life in the United States. Earlier, you mentioned that you are a Muslim, and we have some questions about the views and experiences of Muslims living in the United States. I think you will find these questions very interesting.”

The logic for revealing the principal research focus of the study – a practice not common in survey research – was that respondents would quickly discover that the study was focused on Muslims and Islam, and that there would be a greater chance of establishing trust and rapport by revealing the intent of the study before asking questions specific to experiences as a Muslim or about the Islamic faith. Indeed, in initial pretesting of the 2007 study without the early presentation of the study’s purpose, some respondents expressed suspicion and eventually broke off the interview.

As was true with the 2007 survey, a high percentage of respondents identified in the screening interview as Muslim (78%) eventually completed the survey. This completion rate is somewhat lower than average for other Pew Research Center surveys, where completion rates of 85% to 95% are more common. But given that the mean survey length was 32 minutes (12 minutes longer than the average survey conducted by the center), a somewhat higher-than-normal breakoff rate was not unexpected. The 78% completion rate does not include respondents who dropped off during the short screener interview prior to answering the religion question.

Pilot Test and Pretest

For the pilot test of selected questions from the survey, 97 interviews were completed with Muslim American adults sampled from the Experian list. The interviews were conducted March 10-13, 2011; interviews were conducted in English. Among households completing the screener, the Muslim incidence was 32%. The completion rate among qualified Muslims was 82%. The average interview length for pilot test interviews with Muslims was 14 minutes. Based on the results of the pilot test, a number of changes were made to the questionnaire and interviewer training procedures.

The pretest of the full survey resulted in 21 completed interviews with Muslim American adults sampled from the Experian list. The interviews were conducted March 31-April 3, 2011; interviews were conducted in English. Among households completing the screener, the Muslim incidence was 36%. The completion rate among qualified Muslims was 60%. The average interview length for pretest interviews with Muslims was 29 minutes. Additional changes were made to the questionnaire and interviewer training procedures based on the results of the pretest.

Survey Administration

The administration of this survey posed several challenges. For example, the volume of interviewing was very large. The survey firm that conducted the interviewing, Abt SRBI, devoted 24,500 interviewer hours to the study over a 14-week timeframe, with the bulk of this spent screening for this rare population. A total of 43,538 households were screened, with 706,945 unique phone numbers dialed over the field period. This was achieved by deploying 480 English-speaking and 12 foreign language-speaking interviewers.

Multilingual interviewers already on staff were used for the project. Additional multilingual interviewers were recruited; their language proficiency was first tested and scored by an accredited vendor before they were interviewed and hired by Abt SRBI. All non-English interviewers completed the standard initial training that Abt SRBI requires of all interviewers. Bilingual interviewers with greater proficiency and more interviewing experience were given supervisory roles, monitoring surveys conducted in their language and assisting with training and debriefing.

Building trust with respondents was critical for the survey’s success. For the landline RDD sample, fewer than 1 out of 200 households screened included a Muslim. This made it extremely important to minimize mid-interview terminations. Hence, it was important for all of the interviewers – Muslim and non-Muslim – to have experience in interviewing this population. To achieve this, all interviewers worked on the Experian list sample first; after having completed a few interviews with Muslim respondents, they were allowed to dial the landline and cell RDD geographic samples.

An incentive of $50 was offered to respondents near the beginning of the survey, after it was determined that the respondent identified as Muslim in response to a question about religious affiliation. The decision to offer an incentive was based on two principal considerations. First, the survey entailed a substantial commitment of time for respondents. The mean length of an interview was approximately 32 minutes (considerably longer than the average of 20 minutes for other Pew Research Center surveys), and about 18% of the interviews lasted 40 minutes or longer. Second, incentives have been repeatedly shown to increase response rates, a critical consideration in studies of rare populations where substantial effort is devoted to locating qualified respondents.1 The use of incentives has been shown to be particularly helpful in improving participation among reluctant respondents. Most respondents (84%) provided name and address information for receiving the incentive payment.

In addition, all qualified Muslim households and Muslim language barrier cases (Arabic, Urdu, Farsi) that were unable or unwilling to complete the interview during the initial calls were sent, where possible, a letter explaining the purpose and scope of the study. All language-barrier letters were translated into the respective languages. A total of 705 such letters were mailed.

To mitigate potential gender biases in the composition of the sample, the interviewing protocols for landline households attempted to match male interviewers with male respondents and female interviewers with female respondents. This practice is common among survey researchers conducting face-to-face interviews in majority Muslim nations. Interviewer/respondent gender matching was not implemented, however, when calling cell phone numbers because cell phones are predominantly used as a personal (rather than household) device.

The screening effort yielded a response rate of 22% for the geographic landline RDD sample, 20% for the cell RDD sample, 18% for the list sample, and 54% for the recontact sample, using the Response Rate 3 definition devised by the American Association for Public Opinion Research (AAPOR). Detailed AAPOR sample disposition reports are provided at the end of this section.
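For reference, AAPOR's Response Rate 3 treats an estimated share of unknown-eligibility cases as eligible. The sketch below shows the calculation with placeholder disposition counts, not the study's actual dispositions.

```python
def aapor_rr3(I, P, R, NC, O, UH, UO, e):
    """AAPOR Response Rate 3: completed interviews divided by estimated eligible cases.

    I  = complete interviews        P  = partial interviews
    R  = refusals and break-offs    NC = non-contacts
    O  = other eligible non-interviews
    UH = unknown if household/occupied   UO = unknown, other
    e  = estimated proportion of unknown-eligibility cases that are eligible
    """
    return I / (I + P + R + NC + O + e * (UH + UO))

# Placeholder dispositions, not the study's actual counts.
print(round(aapor_rr3(I=500, P=20, R=400, NC=900, O=80, UH=600, UO=150, e=0.35), 3))
```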

The completion rate for qualified Muslim respondents was 78% for the geographic landline RDD sample (excluding the list), 81% for the cell RDD sample, 74% for the list stratum of the RDD sample, and 90% for the recontact sample.

Weighting

Several stages of statistical adjustment (weighting) were needed to account for the use of multiple sampling frames and higher sampling rates in certain geographic areas. The first stage involved identifying all of the adults (Muslims and non-Muslims) who completed the screener in the landline (geographic + list strata) and cell RDD samples. These cases were adjusted, based on their probability of being sampled for the survey. This adjustment accounted for four factors: (1) the percent of telephone numbers that were sampled in the stratum; (2) the percent of telephone numbers sampled in the stratum for which eligibility as a working and residential number was not determined; (3) the percent of residential numbers that were completed screeners in the stratum; and, (4) the number of eligible adults in the household. This can be written as:

BW_hi = (N_h / n_h) × ((R_h + U_h) / R_h) × (R_h / S_h) × A_hi

where

N_h is the number of telephone numbers in the frame in stratum h,
n_h is the number of telephone numbers sampled,
U_h is the estimated number of working residential numbers among those with unknown eligibility,
R_h is the number of telephone numbers that are determined to be residential,
S_h is the number of completed screener interviews, and
A_hi is the number of eligible adults in household i in stratum h.
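A minimal sketch of this base-weight calculation, assuming the stratum-level screener tallies are available in a small table. The counts shown are placeholders, and the column names simply mirror the symbols defined above.

```python
import pandas as pd

# Hypothetical stratum-level screener tallies (not the study's actual counts).
strata = pd.DataFrame({
    "stratum": ["high", "medium", "low", "list"],
    "N_h": [2_000_000, 9_000_000, 14_000_000, 608_397],  # numbers in frame
    "n_h": [60_000, 140_000, 90_000, 50_000],             # numbers sampled
    "U_h": [4_000, 9_000, 6_000, 3_000],                   # est. working residential among unknowns
    "R_h": [20_000, 45_000, 28_000, 15_000],               # determined residential
    "S_h": [9_000, 21_000, 12_000, 7_000],                 # completed screeners
}).set_index("stratum")

def screener_base_weight(row, A_hi):
    """BW_hi = (N_h/n_h) * ((R_h + U_h)/R_h) * (R_h/S_h) * A_hi."""
    return (row.N_h / row.n_h) * ((row.R_h + row.U_h) / row.R_h) * (row.R_h / row.S_h) * A_hi

# Example: a completed screener in the medium-density stratum with 2 eligible adults.
print(round(screener_base_weight(strata.loc["medium"], A_hi=2), 1))
```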

The value of A_hi depended not just on the composition of the household but also on whether the number dialed was for a landline or a cell phone. For landline cases with no Muslim adults in the household, A_hi is simply the total number of adults in the household. For cell phone cases with no Muslims, however, no within-household selection was performed, so the A_hi adjustment equaled 1. For cell phone cases in which the person answering the phone was Muslim, there was also no within-household selection performed, so the adjustment also equaled 1. In instances where the initial cell respondent was non-Muslim but reported that there was a Muslim adult in the household, one Muslim adult was randomly selected; the A_hi adjustment in these cases equaled the number of Muslim adults in the household. Similarly, for all landline cases in which there was at least one Muslim adult in the household, the A_hi adjustment equaled the number of Muslim adults in the household.
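The A_hi rules just described can be summarized in a small helper function. This is an illustrative restatement of the logic above; the function and argument names are ours.

```python
def eligible_adults_factor(frame, n_adults, n_muslim_adults, answerer_is_muslim=False):
    """Return A_hi following the rules described above (names are illustrative).

    frame              -- "landline" or "cell"
    n_adults           -- total adults in the household
    n_muslim_adults    -- Muslim adults in the household (0 if none)
    answerer_is_muslim -- True if the person answering the cell phone was Muslim
    """
    if frame == "cell":
        if n_muslim_adults == 0 or answerer_is_muslim:
            # No within-household selection was performed in these cases.
            return 1
        # Non-Muslim answerer, Muslim adult(s) present: one selected at random.
        return n_muslim_adults
    # Landline: selection among Muslim adults if any, otherwise among all adults.
    return n_muslim_adults if n_muslim_adults >= 1 else n_adults

# Example: landline household with 3 adults, 1 of whom is Muslim.
print(eligible_adults_factor("landline", n_adults=3, n_muslim_adults=1))
```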

The probability of selection adjustment for recontact sample cases was computed differently. Recall that the recontacts are Muslim adults who live in households in which a Muslim had previously been interviewed for an unrelated survey conducted between 2007 and 2011. Each of these previous surveys was based on an independent, equal-probability national RDD sample. For weighting purposes, we assume that the population totals did not vary over the 2007-2011 time period. The base weighting for the recontact cases accounts for two factors: (1) the standardized weight from the previous survey and (2) the sample size of the previous survey. This can be written as

BW_i = SW_i / n_p

where

SW_i is the standardized weight for respondent i in the previous survey, and
n_p is the sample size of the previous survey in which the household participated. The standardized weights were computed by dividing the final weight for respondent i in the original survey by the average of the final weights in the original survey.
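A minimal sketch of the standardized-weight calculation and the recontact base weight as reconstructed above. The prior-survey weights are simulated placeholders; any constant of proportionality is immaterial because the weights are rescaled in later calibration steps.

```python
import numpy as np

# Placeholder final weights from a hypothetical earlier survey of n_p = 1,500 adults.
rng = np.random.default_rng(0)
final_weights = rng.lognormal(mean=0.0, sigma=0.5, size=1_500)
n_p = final_weights.size

# Standardized weight: final weight divided by the survey's average final weight.
standardized = final_weights / final_weights.mean()

# Recontact base weight, proportional to the standardized weight over the prior sample size.
recontact_base_weight = standardized / n_p
print(recontact_base_weight[:3])
```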

After the calculation of the base weights, the next step was to account for the overlap between the landline and cell RDD frames. Adults with both a residential landline and a cell phone (“dual service”) could potentially have been selected for the survey in both frames. The dual service respondents from the two frames were integrated in proportion to their effective sample sizes. The first effective sample size was computed by filtering on the dual service cases in the landline RDD sample (list + geographic strata) and computing the coefficient of variation (cv) of the final screener base weight. The design effect for these cases was approximated as 1 + cv². The effective sample size (n1) was computed as the unweighted sample size divided by the design effect. The effective sample size for the dual service cases in the cellular RDD sample (n2) was computed in an analogous way. The compositing factor for the landline frame dual service cases was computed as n1/(n1 + n2). The compositing factor for the cellular frame dual service cases was computed as n2/(n1 + n2). Separately, we integrated the dual service cases in the recontact sample. The process for computing the compositing factor for these cases was analogous to the process described above for the fresh RDD plus Experian cases.
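A minimal sketch of the compositing computation for dual-service cases, assuming the screener base weights for each frame are held in arrays. The weights are simulated placeholders; the function follows the 1 + cv² design-effect approximation described above.

```python
import numpy as np

def effective_sample_size(weights):
    """Effective n = unweighted n / (1 + cv^2), where cv is the coefficient of
    variation of the weights (the design-effect approximation described above)."""
    weights = np.asarray(weights, dtype=float)
    cv = weights.std(ddof=0) / weights.mean()
    return weights.size / (1.0 + cv**2)

# Placeholder base weights for dual-service screener cases in each frame.
rng = np.random.default_rng(1)
landline_dual = rng.lognormal(0.0, 0.6, size=4_000)
cell_dual = rng.lognormal(0.0, 0.4, size=2_500)

n1 = effective_sample_size(landline_dual)
n2 = effective_sample_size(cell_dual)

landline_factor = n1 / (n1 + n2)   # multiplies landline dual-service weights
cell_factor = n2 / (n1 + n2)       # multiplies cell dual-service weights
print(round(landline_factor, 3), round(cell_factor, 3))
```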

Once the landline and cell RDD samples were integrated, we sought to address the fact that adults living in counties assigned to the lowest density stratum had been excluded from the landline RDD and cellular RDD geographic samples. Whenever a substantial proportion of the population is not sampled due to expected low incidence of the target population, the method of adjusting the estimates to account for the exclusion is important and yet difficult because of the lack of data from the survey itself. To adjust for these exclusions, the base weights for the RDD geographic samples were adjusted differentially depending on whether the respondent was Muslim or non-Muslim.

The coverage factor for those who were not Muslim Americans was determined by examining the percentage of all adults in the excluded areas (44.6%) based on 2009 county-level figures from the Census Population Estimates Program. The adjustment for non-Muslim cases was 1/(1-.446)=1.81. The coverage adjustment for Muslim cases was compiled from several sources. According to 2005-2009 ACS counts of U.S.-born persons whose ancestors lived in predominantly Muslim countries, about 19.2% of Muslims live in the excluded areas. This is higher than the estimates based on ACS counts of persons born in predominantly Muslim countries (13.5%) and speaking Muslim languages (15.2%). Taking the most conservative estimate of 19.2% exclusion, the adjustment that we used for Muslim cases was 1/(1-.192)=1.24. The Experian list and recontact cases did not require coverage adjustment because they did not exclude any areas of the country.

The dual frame RDD sample of non-Muslims and Muslims was then balanced to control totals for the U.S. adult population. The sample was balanced to match national population parameters for sex, age, education, race, Hispanic origin, region (U.S. Census definitions), and telephone usage. The basic weighting parameters came from a special analysis of the Census Bureau’s 2010 Current Population Survey Annual Social and Economic Supplement (ASEC) that included all households in the continental United States. The cell phone usage parameter came from an analysis of the July-December 2010 National Health Interview Survey.2 After this calibration was performed, all the non-Muslim cases were dropped from the analysis.
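Balancing a sample to marginal population targets is typically done by raking (iterative proportional fitting). The sketch below shows the mechanics on two illustrative dimensions with placeholder targets; it is not the study's actual calibration, which used the full set of CPS and NHIS parameters listed above.

```python
import numpy as np
import pandas as pd

# Placeholder respondent file with two raking dimensions and starting weights.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "sex": rng.choice(["male", "female"], size=2_000, p=[0.55, 0.45]),
    "region": rng.choice(["northeast", "midwest", "south", "west"], size=2_000),
    "weight": rng.lognormal(0.0, 0.5, size=2_000),
})

# Illustrative population targets (proportions), not actual CPS/NHIS parameters.
targets = {
    "sex": {"male": 0.49, "female": 0.51},
    "region": {"northeast": 0.18, "midwest": 0.21, "south": 0.37, "west": 0.24},
}

# A fixed number of raking passes is enough for this toy example.
for _ in range(25):
    for dim, shares in targets.items():
        current = df.groupby(dim)["weight"].sum() / df["weight"].sum()
        ratio = df[dim].map({k: shares[k] / current[k] for k in shares})
        df["weight"] *= ratio

print(df.groupby("sex")["weight"].sum() / df["weight"].sum())
```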

The next step in the weighting process was to evaluate whether some Muslim adults were more likely to complete the survey than others. Specifically, we investigated the possibility that Muslim males were more likely to participate than Muslim females by using responses to questions about the total number of adult Muslim men and adult Muslim women in the household. We used this distribution, which was computed with a household-level weight, to develop an adjustment for propensity to respond by gender. The adjustment aligns the respondent sample to the roster-based distribution for gender as well as respondent reported data on education. Large-scale government surveys, which are the most common source for such population distribution estimates, do not collect data on religious affiliation. This realignment was sample-based, so it retained the variability in the estimates of the number and type of Muslims observed in the screening estimates.
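A minimal sketch of a roster-based gender adjustment of the kind described above, shown for gender only. The respondent counts and roster-based shares are placeholders, and the simple ratio adjustment is one way to align the respondent sample to the roster distribution.

```python
import pandas as pd

# Placeholder respondent file of Muslim completes with current weights.
resp = pd.DataFrame({
    "gender": ["male"] * 620 + ["female"] * 413,
    "weight": [1.0] * 1_033,
})

# Roster-based target shares (from screener reports of Muslim men and women in
# the household, computed with a household-level weight); placeholder values.
roster_share = {"male": 0.53, "female": 0.47}

current_share = resp.groupby("gender")["weight"].sum() / resp["weight"].sum()
resp["weight"] *= resp["gender"].map(
    {g: roster_share[g] / current_share[g] for g in roster_share}
)
print(resp.groupby("gender")["weight"].sum() / resp["weight"].sum())
```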

After the dual frame RDD Muslim cases were calibrated to the U.S. population controls and adjusted for residual nonresponse, we estimated control totals for the adult Muslim American population. We then calibrated the base-weighted recontact sample to those estimated totals. This ensured that the totals for the categories of age, gender, education, race, Hispanic ethnicity, region, and phone service were consistent with the estimates from the dual frame RDD sample.

The recontact and combined RDD cases were then integrated in proportion to their effective sample sizes. Had we simply added the two samples together, they would have estimated twice the Muslim American population total. Rather than dividing the weights of both frames by 2 (equally weighting the samples), we used factors proportional to the effective sample sizes, which worked out to be 0.858 for the dual frame RDD cases and 0.142 for the recontact cases. The final weighted sample aligns with the sample-based totals for the Muslim American adult population.

Due to the complex design of the Muslim American study, formulas commonly used in RDD surveys to estimate margins of error (standard errors) are inappropriate. Such formulas would understate the true variability in the estimates. Accordingly, we used a repeated replication technique, specifically jackknife repeated replication (JRR), to calculate the standard errors for this study. Repeated replication techniques estimate the variance of a survey statistic based on the variance between sub-sample estimates of that statistic. The sub-samples (replicates) were created using the same sample design, but deleting a portion of the sample, and then weighting each sub-sample up to the population total. The units to be deleted were defined separately for each of the three samples (landline RDD, cell RDD, recontacts), and within each frame by the strata used in the sampling. A total of 100 replicates were created by combining telephone numbers to reduce the computational effort. A statistical software package designed for complex survey data, Stata v11, was used to calculate all of the standard errors and test statistics in the study.
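A minimal sketch of delete-one-group jackknife variance estimation for a weighted proportion. The outcome, weights and replicate groups are simulated placeholders; the 100 groups mirror the number of replicates mentioned above, but the grouping here is random rather than defined within the sampling strata.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000
y = rng.binomial(1, 0.3, size=n).astype(float)   # placeholder 0/1 survey outcome
w = rng.lognormal(0.0, 0.5, size=n)               # placeholder final weights
groups = rng.integers(0, 100, size=n)             # 100 placeholder replicate groups

def weighted_mean(y, w):
    return np.sum(w * y) / np.sum(w)

theta_full = weighted_mean(y, w)

# Delete-one-group jackknife: drop each group in turn, reweight, re-estimate.
replicates = []
for g in range(100):
    keep = groups != g
    # Weight the sub-sample back up to the full weighted total, as described above
    # (for a weighted mean this rescaling does not change the estimate, but it
    # matters for totals).
    w_rep = w[keep] * (w.sum() / w[keep].sum())
    replicates.append(weighted_mean(y[keep], w_rep))
replicates = np.array(replicates)

G = 100
variance = (G - 1) / G * np.sum((replicates - theta_full) ** 2)
print(round(theta_full, 3), round(np.sqrt(variance), 4))
```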

Assessing Bias and Other Error

A key question in assessing the validity of the study’s findings is whether the sample is representative of the Muslim population. If Muslims who are difficult to locate or reluctant to be interviewed hold different opinions than those who are more accessible or willing to take part in the survey, a bias in the results could occur. For most well-designed surveys, nonresponse has not been shown to create serious biases because people who do not respond are similar to those who do on key measures in the survey. Whether that is true for the Muslim American population is difficult to determine. To assess this possibility, we compared respondents in households who completed the survey easily with respondents with whom it was more difficult to obtain a completed interview. Comparisons were made between respondents reached within the first few attempts and those who required substantially more attempts. Comparisons also were made between respondents in households where at least one attempt to interview was met with a refusal and those that never refused to participate. In effect, reluctant and inaccessible respondents may serve as a rough proxy for individuals who were never reached or never consented to be interviewed.

This analysis indicates that there are few significant differences between amenable and accessible respondents, on the one hand, and those who were harder to interview, on the other. Respondents who required more call attempts were somewhat more likely to be interviewed in one of the three foreign languages used in the study, an unsurprising result given the need to first identify a language barrier case and then arrange a mutually convenient time for an Arabic-, Farsi- or Urdu-speaking interviewer to administer the interview. Perhaps related to this, harder-to-reach respondents were somewhat more likely to have been born outside the U.S., to say they arrived in the U.S. after 1999 and to have a higher level of religious commitment. On the majority of questions in the survey, however, the differences between the hard-to-reach and other respondents were modest.

Nonresponse bias also can be assessed by comparing the opinions expressed early in the questionnaire by Muslims who did not complete the interview with the views of those who did complete the interview. About half of those who quit the interview did so in the first five minutes, prior to the point when the purpose of the study was revealed. Those who broke off were somewhat more likely to own their own home and to be self-employed or a small business owner. As is true in many surveys of the general public, those who broke off were somewhat less likely to report following what’s going on in government and public affairs “most of the time.” But on the available attitude questions for comparison, the differences were mostly small and non-systematic. All in all, the substantive views of those who did not complete the interview appear to be comparable to those who did.

Assessing Possible Sample Bias

The validity of studies of groups with large immigrant populations depends in part on the extent to which the sample accurately reflects the diversity of the countries of origin and languages spoken by the groups. Overall, this sample conformed closely to expectations based on government surveys.

Data from the 2009 American Community Survey (ACS) provide estimates of the proportion of all Americans born outside the U.S. To compare these estimates with the current survey, the analysis of the ACS data is based on respondents who speak English “well” or “very well,” or who speak Arabic, Farsi or Urdu. Focusing on areas with large Muslim populations, the ACS estimates that 0.4% of the U.S. population were born in the Middle East or North Africa, 0.2% were born in Iran, 0.1% were born in Pakistan, and 0.8% were born in other South Asian countries. Overall, the screener interviews for this survey closely match these ACS estimates, indicating that the survey adequately covers the potential Muslim immigrant population.

Analysis of the survey in comparison to ACS data also suggests that people who speak Arabic or Farsi were screened at appropriate rates; those who speak Urdu were screened at rates slightly below what was expected. The ACS data suggest that, of the U.S. population who speak one of the four languages in which interviewing was conducted, 99.76% speak English very well and 99.91% speak English well; by comparison, 99.79% of the screening interviews for this survey were conducted in English.

The ACS data estimate that between 0.05% and 0.13% of the target population speaks Arabic (and speaks English less than well or very well); 0.17% of screening interviews were done in Arabic. The ACS data estimate that between 0.03% and 0.07% of the population speaks Farsi (compared with 0.04% of screeners completed in Farsi), and that between 0.02% and 0.04% of the population speaks Urdu (compared with 0.01% of screeners completed in Urdu). These findings also indicate that the survey provided adequate coverage of these non-English speaking populations.

Finally, the ACS data make it possible to estimate the proportion of Muslims who do not speak English. Analysis suggests that between 83% and 93% of Muslims in the U.S. speak English well or very well, compared with between 4% and 10% who speak Arabic, 1-2% who speak Farsi, and 2-6% who speak Urdu. With the exception of a small underrepresentation of Urdu speakers, the weighted results of the survey line up closely with these projections.

Verifying Religious Affiliation

As an additional check on the quality of the data, a validation study was conducted to verify the religious preference of survey respondents. The study was fielded by Abt SRBI from June 2 to July 24, 2011. A random subset of respondents was selected from among those who had completed the original survey in English, had accepted the incentive, and were not part of the recontact sample (that is, had not completed a previous survey). Those selected were recontacted by telephone after they had received the incentive for their participation in the original survey. A total of 153 validation interviews were completed (82 by landline and 71 by cell phone). The validation rate for religious preference was 98%; only 3 of the 153 respondents to the validation study did not choose Muslim when asked about their religious affiliation (two chose a different religion and one refused to provide a response).

1. Church, A.H. 1993. “Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis.” Public Opinion Quarterly 57:62-79. Singer, E., Van Hoewyk, J., and Maher, M.P. 2000. “Experiments with Incentives in Telephone Surveys.” Public Opinion Quarterly 64:171-188. Brick, J.M., Montaquila, J., Hagedorn, M.C., Roth, S.B., and Chapman, C. 2005. “Implications for RDD Design from an Incentive Experiment.” Journal of Official Statistics 21:571-589.
2. Blumberg, S.J., and Luke, J.V. 2011. “Wireless Substitution: Early Release of Estimates from the National Health Interview Survey, July-December 2010.” National Center for Health Statistics, June 2011.