Scott Keeter, Director of Survey Research

Presidential Address delivered by Scott Keeter
Director of Survey Research, Pew Research Center
at the
67th Annual Conference of the American Association for Public Opinion Research
Orlando, Florida
May 18, 2012

(The views expressed in this speech are those of the author.)

I am honored to have this opportunity to address the 67th annual conference of AAPOR, an organization I have loved since I first joined in 1986.

For those of us in the business of studying the attitudes, behaviors and experiences of the public, these are the best of times and the worst of times.

Never before in history has so much information about the public been so readily available for us to study and analyze. The world is getting flatter, traditional authorities are losing power, and people are gaining the ability to organize and act without a hierarchy to propel and guide them. In this changing world, information about what people do, what they think and what they want is more critical than ever.

But the institutions dedicated to measuring what the public thinks, experiences and does are undergoing significant change – and much of this change is not good. At the same time, our trusted methods for gathering information are encountering serious challenges, as we all know too well.

Increasing political polarization also poses a threat, as information increasingly comes to be seen through the red and blue filters of our partisan world. Perceptions of the condition of the economy among Republicans and Democrats did not differ during the 1990s, but they began to diverge in the next decade, and the gap is now quite wide. Republicans rated the economy better than Democrats did during George W. Bush’s time in office, and the pattern has reversed since Barack Obama became president.

And more generally, disagreeable or inconvenient information is heavily discounted by many people. When the information comes from survey research, the producers of these surveys face increasingly hostile attacks.[1]

This is not a new phenomenon, but it’s arguably worse today than it was many years ago, facilitated by greater consumer choice in information sources, partisan news media, and a large and sophisticated infrastructure created to generate and disseminate information helpful to the ideological and financial interests of those who pay for it.

None of this is news to you. Among many other voices, AAPOR presidents over the past couple of decades or more have been talking about these issues in one form or another. I would like to offer a few ideas about how we as survey researchers and as AAPOR can respond.

I also want to remind us of a principle that is at the core of AAPOR, the shared point of view that unites a fairly disparate group of academics, policy practitioners, methodologists, political pollsters and others.

A unifying and foundational principle of our profession, and of AAPOR, is that the generation of unbiased information and data about the population is critical to the health of our democracy. Democracy has many meanings, but common to them is the connection between the people and those who hold power in the society. Though often imperfect in conception and implementation, democracy implies that people are equal. And so, biased information about the public weakens the connection at the core of democracy.

Indeed, this core commitment to unbiased data, I would argue, is one of the main reasons why AAPOR continues to attract such a diverse group of researchers: public opinion pollsters, health researchers, sociologists and demographers, and those who focus mainly on the craft and science of drawing samples, designing questionnaires and collecting data.

What are the threats?

When we think about the threats to our profession, many of us probably think first about growing non-response, non-coverage and other methodological challenges we work constantly to overcome. These are formidable. We probably used the term formidable years ago when response rates headed well below the norms we were trained to expect. Depending on the sector in which you worked, that meant below 50 percent, or 40 percent, or 30 percent, or lower. Today, many of us in the public opinion sector are in the single digits using conservative rules for estimating the rates.
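By “conservative rules” I mean formulas like AAPOR’s Response Rate 1, which counts every case of unknown eligibility as an eligible nonrespondent. As a rough sketch of the arithmetic, with disposition counts invented purely for illustration:

    # AAPOR Response Rate 1 (RR1): completed interviews divided by all
    # eligible cases, with every case of unknown eligibility treated as
    # eligible. This is the most conservative of the standard rates.
    def rr1(complete, partial, refusal, noncontact, other, unknown):
        return complete / (complete + partial + refusal
                           + noncontact + other + unknown)

    # Hypothetical dispositions for a telephone sample:
    rate = rr1(complete=900, partial=100, refusal=3000,
               noncontact=4000, other=500, unknown=1500)
    print(round(rate, 2))  # 0.09 -- a single-digit response rate

Run numbers like those and you land right where many of us are today.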

On the matter of how well we cover the population of interest: as early as the fall of 2004, I got dozens of questions about the potential for bias from cell phone-only households in presidential election polls. Cell-only households were 7 percent of all households then. They are probably five times that now, and we know the potential for bias is real.

But tough as these are, they are pretty familiar, so I’d like to focus on some other threats that are equally dangerous, and less amenable to the scientific solutions we have tried to apply to the growing wireless-only population and high refusal rates.

For some of us old-timers in AAPOR, the very technologies and methods talked about at this conference constitute a threat to our way of life. Whether we are talking about opt-in internet panels, which have been around for a while, or non-survey methods such as automated content coding of social media, the integration of data from what has been called the “internet of things,” and so-called big data more generally, these innovations have drawn interest and resources away from traditional surveys. I will come back to this shortly.

Another is the financial struggle of American journalism, the institution that synthesizes much of the information we produce and provides it to policymakers and the public. News organizations also produce survey research themselves or are clients of organizations that do. As my colleagues in the Project for Excellence in Journalism have documented, journalism does not have an audience problem – it has a money problem. Even as the audience for mainstream news organizations has remained stable or even grown, revenues have plummeted.

The digital revolution is responsible for much of the financial problem and may be part of the solution, but to date the balance has been heavily weighted to the problem side. For example, regarding newspapers in 2011, my Pew colleagues reported that “… losses in print advertising dollars outpaced gains in digital revenue by a factor of roughly 10 to 1, a ratio even worse than in 2010. When circulation and advertising revenue are combined, the newspaper industry has shrunk 43% since 2000.” The same may be true in other news media, or will be soon.

Diminished revenue has led to staffing reductions and cuts in newsgathering operations, including polling. News organizations have reduced their polling budgets, cut staff or even eliminated their polling operations. Not only does this reduce the volume of new survey research being conducted, but it also deprives the remaining news staff of expertise about quantitative data that can help them make sense of surveys conducted by others and make informed judgments about the difference between good data and bad data.

Indeed, the market for good data has arguably eroded. There is evidence that many people don’t, or can’t, distinguish between the careful, fact-checked reporting of major news organizations and the on-the-fly postings of blogs and opinion sites masquerading as news sites. The competitive pressures in this environment may even lead to an erosion of standards in major media, just to keep up.

Another problem we confront is that financial pressures in higher education have led to the closing of some university-based survey research centers, and significant challenges to those that remain in business. By all accounts, most of the centers are responding well to these challenges, but not all are. As higher education has had to retrench due to declining state support and other pressures, centers have increasingly become dependent on external funding.

The competitive environment among survey research contractors is very intense right now, as those of you in both the private and public sectors know well. But the academic centers have played a special role in our profession, providing leadership on methodological experimentation and educating the next generation of AAPOR members. Anything that threatens their health should be a concern for all of us.

Perhaps paramount among the challenges are those faced by the federal statistical system, including the U.S. Census Bureau. All of us in the survey business – regardless of our political orientation, our sector or the methods we use – depend upon the federal statistical system for a data infrastructure of rock-solid national parameters.

Historically, most of the system has been safe from political pressures. There certainly has been political pressure on the use of data from the system, but the production of the data was mostly insulated from such pressure, with the notable exception of the controversy over census undercounts and how they might be adjusted.[2]

This has changed recently. A little more than a decade ago, politicians began voicing complaints about what they described as the “intrusiveness” of questions in federal surveys, particularly on what was then called the long form of the census. Even George W. Bush, when he was a candidate for president, said he wasn’t sure if he’d answer the long form if he got it.

Then last week, news arrived that the House of Representatives had voted to eliminate the American Community Survey and the Economic Census, after threatening merely to cut the budget and make compliance voluntary. Sponsors of this legislation charge that the ACS is unconstitutional.

Even if the ACS is not killed, deep cuts in the bureau’s budget loom, along with the elimination of the mandatory requirement for the ACS. If these happen, data quality will suffer. Budget cuts also are threatening other projects at Census, and in the Bureau of Labor Statistics and the Bureau of Economic Analysis.

AAPOR and all of the chapters have signed a letter urging Congress to reverse this action when final appropriations legislation is considered, but the threat is grave, even if the ACS is not, in the end, completely eliminated. Everyone in this room knows how important the census data are for our work, a point reiterated last night in the plenary session on non-probability sampling. Without those national parameters, all methods – probability or non-probability alike – will have a much more difficult time judging and eliminating bias in their data.

Opportunity

Where there is threat, there is sometimes opportunity. Given the weaknesses increasingly evident in our traditional way of doing things, we have to find alternatives.

I can’t find anything good in the attacks on the federal statistical service, and the financial pressures on the institutions that have sponsored a lot of survey research don’t appear to have an upside. But the innovations that seem to compete with our approach may not be the foe they appear to be at first sight.

For one thing, they are made possible by a strange paradox. We can’t get very many people to talk with us when we reach out to them for a survey, and concerns about privacy may have a lot to do with this. But when people interact with their friends – defined rather loosely – on social media and in other digital places, they appear to be very generous with personal details. Perhaps some of what we want to know from people is available in their publicly visible interactions with friends and acquaintances. At least that’s what we hope.

The new world of social research relies to a great extent on what Bob Groves called “organic data” in his essay on the three eras of survey research in Public Opinion Quarterly’s 75th anniversary issue.[3] By organic data he means data available from systems such as the internet and social media. From the perspective of the survey researcher, the world of organic data is just in its infancy: we don’t fully understand what can be extracted from it, and I don’t think it can be fully understood yet.

We do know, however, that never before has so much expression of public opinion – in venues such as Facebook and Twitter, on blogs, and in digital petitions – been so accessible to researchers for analysis. Never before has so much information about people’s behaviors, everything from their shopping, commuting and traveling to their internet searches and reading, been available for linking to survey data and to their expressions of sentiment in social media.

For many, this is an Orwellian nightmare, in which Big Brother is replaced or joined by businesses keeping a constant eye on us. And as Kenneth Prewitt, a former census director, has stressed, the quality of data in databases and in administrative records is questionable.[4] At a minimum, it has not been subjected to the kinds of scrutiny and study that characterize the world of survey research.

But it is undeniably a completely new age for the research world. We have gone from the invention of the probability sample to “big data” in the lifetime of some of the people in this room.

This conference has as its theme the opportunities available to us on this new frontier. A year ago in Phoenix, I met Michael Schober, who along with Fred Conrad wrote Envisioning the Survey Interview of the Future. He pitched the idea of inviting scholars doing cutting-edge work outside of our paradigm to come to this year’s conference to address these issues. The stars aligned, because many people, notably conference chair Dan Merkle, were also thinking along these lines. Dan had no trouble selling the AAPOR Executive Council on this conference theme.

I think this is terrific. Even my organization, the Pew Research Center, which has been quite conservative on the methodological front, has been doing and reporting on such methods as computer-assisted coding of blogs, tweets and news organizations’ websites.
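For the curious, here is what such computer-assisted coding can look like at its very simplest. This is a toy, dictionary-based sketch with an invented lexicon, and emphatically not Pew’s actual procedure, which is far more sophisticated:

    # Toy sketch of dictionary-based content coding: classify each post
    # by whether it contains more positive than negative terms. Real
    # systems use far richer lexicons or supervised classifiers.
    import re

    POSITIVE = {"good", "great", "hope", "win", "support"}
    NEGATIVE = {"bad", "fail", "fear", "lose", "oppose"}

    def code_post(text):
        words = set(re.findall(r"[a-z]+", text.lower()))
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    posts = ["Great debate, real hope for a win",
             "Another bad night, and I fear we lose"]
    print([code_post(p) for p in posts])  # ['positive', 'negative']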

Last year, in his presidential address, Frank Newport said we should “promote flexibility, not dogmatism, in which methods to use….”[5] I’ll see him on “flexibility” and raise him to “enthusiastic experimentation.” My non-scientific conversations with many of you lead me to believe that much of our membership is eager to find out what can be learned with these new approaches. But – as is also typical of the AAPOR approach – we want to judge the quality and applicability of these methods carefully before jumping into the pool. After all, the conference theme is Evaluating New Frontiers.

Three things we can do to keep survey research relevant for democracy

As we think about the threats to survey research and the opportunities available to us, I would urge us to do three things to help keep survey research relevant for democracy.

1. Remember why random samples are important

The first thing I will say will seem obvious to some; it will sound contradictory to others, after my embrace of the new frontier. That is to remind us of the importance of the paradigm that has been the foundation of the work of most of us in the association for most of the time we have been doing it – and that is the production and analysis of what Groves labels “designed data.” We have depended upon it for 75 years.

The linchpin of designed data was the insight of the founders of modern statistics and survey research – the idea that every object in the population would have a known chance of being included and thus, at least in the final analysis, an equal voice. In surveys, this is highly desirable. In a democracy, it is absolutely essential.

I cannot put this more eloquently than it was said by Sidney Verba, the eminent political scientist who wrote the following passage in his presidential address to the American Political Science Association in 1995:

Verba said: “Surveys produce just what democracy is supposed to produce – equal representation of all citizens. The sample survey is rigorously egalitarian; it is designed so that each citizen has an equal chance to participate and an equal voice when participating.”[6]

The paradigm of most of the survey work of AAPOR’s members is the probability sample, to use Verba’s words, the “rigorously egalitarian” method that helps to remove, or at least reduce, the biases associated with literacy, education, wealth and other factors that make the voices of some people louder and clearer than others. Verba was making this point to scholars of political behavior, but it applies to all of us in the survey world.
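It is worth recalling the simple mechanics behind that egalitarian promise. When inclusion probabilities are known but unequal, weighting each respondent by the inverse of that probability restores everyone to an equal voice in the estimate; this is the Horvitz-Thompson idea. A minimal sketch, with invented probabilities:

    # Horvitz-Thompson weighting in miniature: each case counts in
    # inverse proportion to its known chance of selection, so groups
    # sampled at different rates still get their proportionate voice.
    def weighted_mean(values, incl_probs):
        weights = [1.0 / p for p in incl_probs]
        return sum(w * v for w, v in zip(weights, values)) / sum(weights)

    # Invented example: two cell-only respondents (coded 1 on some item)
    # sampled at half the rate of four landline respondents (coded 0).
    values     = [1, 1, 0, 0, 0, 0]
    incl_probs = [0.005, 0.005, 0.01, 0.01, 0.01, 0.01]
    print(weighted_mean(values, incl_probs))  # 0.5, versus a raw mean of 0.33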

If we edge away from the probability model as we explore the new frontier, we must keep an eye on how well we are representing populations that aren’t as present on the internet, Facebook, Twitter or the commercial and credit databases that can be mined for insights.

In research that my colleagues at the Pew Internet and American Life Project conducted with Verba and his collaborators, online forms of participation were found to increase the percentage of young people engaged in certain political acts. But the broader takeaway was that the same biases we see in traditional forms of participation – voting, working for a campaign, communicating with public officials – are still present online. Even though more young people are doing these things, the better educated and more affluent are overrepresented among the activists. As a consequence, substituting analysis of online political participation for survey-based measures will come with a bias toward the better educated and more affluent.

And so I am hoping that we remain mindful about the biases inherent in our new methods. We must not let our tools dictate what we study. We must shape our tools to answer our questions. Bob Groves put it very well, as he usually does, in his Public Opinion Quarterly essay. He says: “The challenge to the survey profession is to discover how to combine designed data with organic data, to produce resources with the most efficient information-to-data ratio.” To which I’d add: “And that don’t privilege the voices and experiences of the wired, the articulate and the highly motivated.”
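One concrete way to honor that addendum is to weight our data, designed or organic, back to trustworthy population benchmarks such as the ACS education distribution; it is one more reason the federal parameters matter. A minimal post-stratification sketch, with every figure invented for illustration:

    # Minimal post-stratification sketch: adjust an online sample that
    # over-represents college graduates back to a population benchmark.
    # All numbers below are invented; real benchmarks would come from
    # sources such as the American Community Survey.
    sample_share    = {"college": 0.60, "no_college": 0.40}
    benchmark_share = {"college": 0.30, "no_college": 0.70}

    weights = {g: benchmark_share[g] / sample_share[g] for g in sample_share}

    # Re-estimate an outcome (say, percent favoring some proposal):
    outcome  = {"college": 0.80, "no_college": 0.50}
    raw      = sum(sample_share[g] * outcome[g] for g in outcome)
    adjusted = sum(sample_share[g] * weights[g] * outcome[g] for g in outcome)
    print(round(raw, 2), round(adjusted, 2))  # 0.68 unweighted vs. 0.59 weighted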

2. Defend yourselves (and others) in the industry

The second thing I’m recommending is that we all do everything we can to defend high-quality survey research, its producers and those who distribute it. We are all accustomed to criticism from those who don’t like our findings, but in the sped-up, 24/7 news cycle, with highly partisan bloggers and news organizations playing a bigger role today, this type of criticism is more prevalent than ever.

Information not only informs policy making, but serves as a political weapon. Perhaps it always has, but I have a sense that bad information, whether it’s junk science, economics or polling data, is now more widespread.

And then there are the familiar attacks on polling that those of us in the opinion research world have been dealing with for decades – that polls are used by politicians to manipulate the public; that horse race journalism is a disease brought on by horse race polling; that public opinion polls turn leaders into followers; that polls create opinions where none exist. All of these have a bit of truth in them, and all have been around since near the beginning of modern polling.

Andrew Kohut, president of the Pew Research Center, spoke to the issue of defending polling in his presidential address to this conference in 1995.[7] He looked at the criticisms leveled against polling 50 years prior, and found at least two notable things. First, the criticisms were very familiar. Second, the response of the pioneers of the survey industry – George Gallup, Harry Field, Paul Lazarsfeld and others – was swift and sharp. But he observed that the pollsters of 1995 were not so quick to defend polling and its role in democracy.

At AAPOR we are fighting a mostly defensive war. We try to address allegations of push-polling and other instances of groups using surveys as a cover for what they are really trying to do. We sign letters and protest attacks on survey research generally and on the federal statistical service.

One of the things we are doing proactively to address specific criticisms about the role of polls in our democracy is a task force headed by Bob Shapiro and Frank Newport, the “Public Opinion and Leadership Task Force.” Among other things, this task force is dealing with concerns about how credible public opinion is on issues where public knowledge is low and preferences are weak. The task force aims to provide guidance on when polls provide a reliable indicator of public sentiment and when they should be accorded less confidence.

And, as I noted earlier, the federal statistical service is critical to us. The song says you don’t know what you’ve got till it’s gone, but WE know. We must defend its funding and its political independence.

3. Promote transparency in the use of the new methods… and the old ones

My last recommendation is short and familiar: to promote and practice full transparency in the use of the methods on the new frontier – and in the use of the old methods. A cursory review of reporting from most online opt-in panels indicates that transparency is not their strong suit, to put it mildly. For non-survey methods, the record may even be worse. But even in the world of traditional survey methods, we’ve learned from the Transparency Initiative that many of our members – the best intentioned citizens of the survey world – struggle with the details of documenting their methods.

In his presidential address to AAPOR in 2010, Peter Miller rightly pointed out that a lack of transparency in survey research is a grave threat.[8] Opacity contributes to ignorance about surveys. It makes fraud more likely. It contributes to the growth of cynicism about surveys. And it allows some to draw a false equivalence among sources of data, reducing good survey data to the lowest common denominator of information.

A lack of transparency also fails to take advantage of one of the great benefits of the new information environment, which is peer review and pressure. This is the good side of the enhanced scrutiny and criticism we get. Especially for those of us polling in the political world, a sophisticated critic is likely watching what we are doing and will call us out if we make mistakes. Being transparent won’t immunize us, but at least it reduces the chance that our motives will be questioned. And if we are transparent, flaws and limitations may be quickly discovered, so that they can be corrected, and we can do better next time.

Transparency, whether in the old methods or the new ones, makes it a lot easier for us to defend the work of our colleagues. If we know what they did and how they did it, we can defend them from a position of strength.

Conclusion

So while I worry about the challenges we are facing, I am confident that AAPOR and its members are up to meeting those challenges. Our history shows that we are. AAPOR and its members led the field’s response to previous upheavals. Our conferences feature research on the new problems well before answers emerge in the research literature – just think about the cell phone coverage issue. Competitors share knowledge freely, even though doing so may lessen their advantage in the marketplace. And there remains an open-mindedness that makes it possible to have a conference theme about non-survey methods and non-probability sampling approaches, and have a big audience for a plenary featuring a respectful debate on the subject. That’s the AAPOR I know and love.

Thank you for your attention, and I look forward to many more years in AAPOR with all of you.


[1] Examples include Republican protests against the pollster of the Minneapolis Star-Tribune during the 2004 presidential election, and attacks on the Gallup Poll by MoveOn.org in the same year.
[2] Kenneth Prewitt. “The U.S. Decennial Census: Politics and Political Science.” Annual Review of Political Science 13 (2010): 237-254.
[3] Robert M. Groves. “Three Eras of Survey Research.” Public Opinion Quarterly 75 (Special Issue 2011): 861-871.
[4] Stephen E. Fienberg and Kenneth Prewitt. “Save Your Census.” Nature 466 (26 August 2010): 1043.
[5] Frank Newport. “Taking AAPOR’s Mission to Heart.” Public Opinion Quarterly 75 (2011): 593-604.
[6] Sidney Verba. “The Citizen as Survey Respondent: Sample Surveys and American Democracy.” American Political Science Review 90 (March 1996): 1-7.
[7] Andrew Kohut. “Opinion Polls and the Democratic Process, 1945-1995.” Public Opinion Quarterly 59 (1995): 463-471.
[8] Peter V. Miller. “The Road to Transparency in Survey Research.” Public Opinion Quarterly 74 (2010): 602-606.