Public science programs serve a variety of functions. Some pursue fundamental discoveries, others generate usable knowledge intended to support policy decisions, and some aim to do a bit of both. For programs that seek to produce usable science, adequately linking researchers with end users in the policy realm can be a challenge. Indeed, failure to foster such linkages can lead to program shortcomings and criticism (for instance, see this review of the debate around usable climate science).
In the spirit of this ongoing conversation, a trio of science and policy practitioners has developed a new multi-pronged research typology to serve as a design and evaluation tool for science managers. The authors are Elizabeth C. McNie, Research Scientist at the Western Water Assessment, part of the University of Colorado Boulder’s Cooperative Institute for Research in Environmental Sciences (CIRES); Adam Parris, Executive Director of the Science and Resilience Institute at Jamaica Bay and formerly a scientist and division chief at the National Oceanic and Atmospheric Administration (NOAA); and Dan Sarewitz, Co-Director of the Consortium for Science, Policy, and Outcomes (CSPO) at Arizona State University.
AAAS recently spoke with McNie and Parris to learn more. The typology itself is shown below; see the CSPO website for the full project report (or download a 2-page summary). The conversation below has been edited for length and clarity.
AAAS: So what is this typology about and how does it work?
McNie: The typology is a tool that can help inform the deliberation about, and the design and implementation of, research. It divides research into three general activities, with additional attributes under each activity. For each attribute, it asks specific questions that help the user understand where a particular research project or program lies on a spectrum between what we call “science values” on one end and “user values” on the other. “Science values” prioritize the generation of knowledge for its own sake…whereas “user values” prioritize producing useful information for decision-making, to meet users’ needs. So the typology can take any kind of research project or program and, by walking through the questions for each attribute, help the user determine where that project lies on the spectrum and in turn understand how it might be improved, redesigned, or implemented to best suit the value demands at play.
Parris: It’s a really exciting method for making smarter investments in science, ones that balance our expectations for advancing theory and fundamental knowledge against our expectations of using science to address problems we know people are facing out there…It’s a way of looking under the hood, designing better incentives and mechanisms and institutions for science that can really tap its potential even more than we already have.
"If and when the time comes where you want science to inform decision-making, there has to be a connection between the researchers and the users."
Dr. Elizabeth C. McNie, Western Water Assessment
Science is done for a whole bunch of reasons. You can explore fundamental theories about how the world works, you can try to resolve what an uncertain future looks like, or you can address a very specific problem about bacteria in the water that you drink. And I think what we’ve learned through science and technology studies, and through practicing and exploring different methods of science, is that different institutional environments actually suit those scientific pursuits better or worse. So what the typology does is create a structured process to assess the fit between science that’s meant to address problems in the real world and science that’s meant to advance our understanding as a body of knowledge.
Regarding research intended for a defined use, how much of a problem is it that research performers are disconnected from potential users?
McNie: If and when the time comes where you want science to inform decision-making, there has to be a connection between the researchers and the users. The problem of trying to connect research to decision-making, I would say, is still pretty widespread in the mission agencies. And part of that is because of our overreliance on “basic” and “applied” research [as categories] … Too often with applied research the kind of metrics that we use for evaluating the research are still oriented towards “science values,” so that’s the number of peer-reviewed publications, the impact factor, the number of citations…Even if mission agencies support what we call “applied research,” there’s still no explicit connection between the researcher and the user, whereas if you’re talking about “use-inspired research,” the connection between researchers and users is explicit. That has ramifications in terms of how the research is designed and implemented. And so part of the use of the typology is to help us understand where opportunities exist to better link research and use.
Parris: It’s not a problem if the science isn’t meant to address user needs or societal issues. I mean, if your explicit purpose is to ponder, design, or pursue science that is theoretical in nature, that is trying to advance theories and advance knowledge, then it might be perfectly fine not to interact with the folks who might want to use your stuff or might find value in it for solving real-world problems.
I think those terms “basic” and “applied” are views of how science should be. They’re almost philosophies of what science should be. And ironically, as science has become more diversified and people have pursued it in different ways, it gets harder to apply those two terms to all the different kinds of science that are happening across the U.S. That’s where the need for something like the typology emerges, because what we want to do is really learn something about how different scientific approaches, ones that involve users and ones that don’t, actually solve problems and inform decisions over time, and then feed that back into our institutions.
Let’s say I manage a federal science program. Walk me through how I might incorporate this tool, and what value I might get out of it.
Parris: I used to be that hypothetical program manager…When I was at NOAA, I was managing a program where different research consortia were located in different regions of the U.S., and all of them were meant to be pursuing the overarching goal of helping people who are seeking to adapt to climate risk. That sounds like a good unifying theme, but when you take that goal and put it into these different regions that have really different cultural, institutional, and climatological settings, how the different projects within the eleven individual programs play out gets really complex. For example, over the course of a five-year program in New York, an event like [Hurricane] Sandy happens, and the program might have to change course from what it was doing on issues like urban heat and its effect on public health to look at coastal flooding…Conversely, you move to a setting like Alaska, where those kinds of extreme events may not have happened over that same period of five years, and it’s really remote, and they have to pursue those efforts differently. The typology is a really useful tool to compare two different programs in two different settings over two different periods of time and see how the different approaches help to inform decisions related to weather and climate risks.
Switching gears, in my current role, I’m managing a consortium of nine different universities. There are a lot of different researchers and they all have different preferences for where we fall on that spectrum from “science values” to “user values.” And in designing this program and balancing out the extent to which we’re looking at fundamental problems or user-oriented problems, what I want to do is use the typology as a survey instrument to understand people’s expectations and their own views, so that I’m not creating conflict by having a mismatch between expectations.
The typology didn’t exist [when I was at NOAA]; it’s something I wish had existed. I also want to be clear that I wouldn’t necessarily have used it to evaluate those different programs, but more to compare the different approaches and understand some of the factors that were constraining those different teams of scientists from achieving the goal of the program.
"As people have pursued science in different ways, it gets harder to apply those terms like 'basic' and 'applied' to all the different kinds of science that are happening across the U.S."
Adam Parris, Science and Resilience Institute at Jamaica Bay
McNie: For example, if you are interested in developing fundamental knowledge about quasars, you would be most interested in epistemic expertise and exploratory goals for research. You’re most interested in reducing uncertainty. Whereas if you’re more interested in developing information that is based on users’ needs and can be applied to an actual problem, you would be more interested in contextual information. You would probably not be interested in reducing uncertainty but rather in managing uncertainty. The kind of expertise you’d want would be epistemic but also experiential. Based on how you evaluate a project using the typology, you can then make adjustments in how you allocate human, financial, and other resources to serve your research program.
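To make the spectrum idea concrete, here is a minimal, hypothetical Python sketch of how a program manager might record where a project’s attributes fall between “science values” and “user values.” The activity groupings, attribute names, and 0-to-1 scale are illustrative assumptions loosely inspired by the contrasts McNie mentions (exploratory vs. applied goals, reducing vs. managing uncertainty, epistemic vs. experiential expertise); they are not the actual instrument from the report.

```python
# Illustrative sketch only -- not the authors' instrument. Each attribute
# question places a project on a spectrum from 0.0 ("science values")
# to 1.0 ("user values"); attributes are grouped by research activity.

from dataclasses import dataclass
from statistics import mean

@dataclass
class Attribute:
    name: str     # attribute question, e.g. "uncertainty: reduce vs. manage"
    score: float  # 0.0 = pure "science values", 1.0 = pure "user values"

def activity_profile(activities):
    """Average the attribute scores within each research activity."""
    return {activity: mean(a.score for a in attrs)
            for activity, attrs in activities.items()}

# Hypothetical assessment of a use-inspired water-quality project.
project = {
    "knowledge production": [
        Attribute("goals: exploratory vs. applied", 0.7),
        Attribute("uncertainty: reduce vs. manage", 0.8),
    ],
    "engagement": [
        Attribute("expertise: epistemic vs. experiential", 0.6),
        Attribute("information: generalizable vs. contextual", 0.9),
    ],
}

print(activity_profile(project))
# -> {'knowledge production': 0.75, 'engagement': 0.75}
```

Comparing such profiles across projects, or for one project over time, mirrors the kind of design, monitoring, and evaluation uses the authors describe.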
What kind of a reception have you gotten from different science and policy communities? Are managers and decision-makers putting this to work?
McNie: The initial feedback from agency folks was pretty positive. Those who are interested and see the value of trying to reshape research to be more productive gave us some good feedback on the two occasions when we presented the typology. We’ve also gotten a good reception from people in the research community who are interested in using it. I’m currently using it with a research project…We’re already beginning to implement it, and so we’ll be using the typology not only in the initial stages of the research to help inform research design, but also periodically throughout the two-year project to help keep the research on track toward the desired outcomes. And then it’ll also be used at the end as a tool for evaluation.
Parris: It’s facilitated some really interesting discussions comparing different approaches, comparing different projects, and trying to talk about lessons learned for pursuing science that informs decisions. And I think part of that is because, previously, a lot of those discussions would get wrapped up in characterizing or putting a name to that kind of science. And now what we’re saying is the name isn’t so important.