Performance and Accountability: Applying GPRA to Research
To the research community, this is a painful and difficult topic. It is also widely misunderstood.
I want first to separate our view that research activities should comply with the Government Performance and Results Act (GPRA) from the silly notion that doing so undermines our support for research itself. There is absolutely no question that research and public investments in science and technology are important and central to the public agenda. This is a bedrock belief of this President and this Administration, and a commitment of previous Administrations as well. Research matters. We know that.
The President's budgets show this as well. When you look at the federal budget, the piece of the pie that is appropriated, which includes research, was squeezed in the 1980s. Today, notwithstanding the fact that we are balancing the budget and reducing overall spending, we are also gradually increasing the share devoted to research. It was squeezed in the mid-1980s and in 1994, but we are bringing it back.
In the President's budget proposal for fiscal year 1999, we proposed a very substantial increase in research spending: 32 percent over the next 5 years. We did so by creating the Research Fund for America. It would fund key programs, not only of the National Institutes of Health (NIH), but also of the Centers for Disease Control and Prevention, the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), the National Institute of Standards and Technology, the National Oceanic and Atmospheric Administration, the U.S. Department of Agriculture (USDA), and others. In creating the fund, we wanted to highlight research and make sure it is adequately supported by a dedicated stream of revenues. We proposed that tobacco receipts be a major part of that stream.
So this is not a discussion about whether research is important. That discussion is over. We think research is extremely important, and that is reflected in our budget proposals. The discussion now is about how we allocate those research dollars.
First, a note of humility. I went to Stanford in the 1960s intending to become a biochemist. I went there because Joshua Lederberg was there and it was an extraordinary department. I knew absolutely, with all of the certainty that 22-year-olds can have, that there would be extraordinary advances in molecular biology and genetics and that they were going to change the face, not only of science and technology, but of the economy and society as a whole. I had a number of friends in another discipline, electrical engineering. The truth is I felt a little sorry for them. After all, what did they have to work on? They could work at Hewlett-Packard, they could work on radar, they might work for the defense industry, but it was clear that we knew where the action was. And if it hadn't been for Fairchild Semiconductor and Intel and Apple Computer and Xerox and all the other companies that turned Stanford farmland into Silicon Valley, I'm sure I would have been right.
But I was not alone. I then went to the Kennedy School where I was joined by a colleague who had been one of Murray Gell-Mann's graduate students, very smart, very dedicated, and absolutely serious. I asked him why he had given up the track for a Ph.D. in physics at Caltech and come to the Kennedy School. He said, "Well, Josh, to be honest, I think we've kind of plateaued in quantum mechanics. I just don't think there's going to be a whole lot more going on there. So I wanted to move into some other area." He has had a brilliant career over the last 25 years, but physicists haven't done too badly either.
I cite these examples to show that even I, a "green eyeshade" type from the Office of Management and Budget, understand how hard it is to know in advance what the results of research will be. We have plenty of examples of research bearing fruit in ways that were not expected. And that means that we cannot comply with the Government Performance and Results Act by specifying in advance what those results will be. While we believe it is important for federal agencies to respond to GPRA, we understand that scientists are not going to be able to set as an annual target that, for example, in FY 1999 NSF will produce three specified Nobel Prize-winning pieces of research. We all understand that it is difficult to predict when, where, and how research will have results. Everybody understands that, from my "green eyeshade" colleagues up to the Vice President and the President of the United States.
We know this makes figuring out how to comply with GPRA challenging, but (and this is the hard part of the message) it is necessary. It is necessary not only because the law says you must think about performance, but because it is especially important to establish the credibility of a process when the results of that process are indefinite, unclear, and hard to define in advance. Especially in such circumstances, accountability matters. And the Performance and Results Act is about accountability.
Because this is new and it is hard, we implement GPRA through relatively general guidance, and then we customize. What you ought to measure varies agency by agency and process by process. So we work with each agency individually, and we tie measures to the mission and goals of the agency itself. We ask: "What are you trying to achieve, what's the process by which you are trying to do so, and what measures can we set up?" In most cases, the agency itself chooses the measures. For example, the National Science and Technology Council created a set of principles for R&D. They could have been procrustean, but they are not. One principle says that when you are assessing performance, use a range of measures, both quantitative and qualitative. And when you are thinking about process, pay attention to procedures, but don't overvalue any particular measure.
When you are talking about basic research, these will necessarily be process measures. As a general rule, we think that basic research programs should rely on competition in allocating resources. That means we encourage the process of external peer review. We have a variety of euphemisms for this. We say, for instance, that federally funded research "will be of high quality." So, for example, NASA and NSF have committed that at least 80 percent of their external projects will be reviewed by appropriate peers and selected through a merit-based competitive process. This is a process measure, and an entirely appropriate way to implement GPRA.
We also use other measures. For example, we say to agency managers, "What are you trying to do?" NASA came back to us and said, "Part of our job is to provide a linkage between our research and general education." So they have set as part of their performance plan that every major project they do will have an education and outreach program. The test under the Results Act will be whether they thought about the connection between education and research in each of their programs. Did they establish some mechanism to achieve it? We don't specify what that mechanism should be. We don't specify how much it should be. We don't try to make false estimates of how many people it will reach. But we do ask if they reached the goals they set for themselves.
The last measure for basic research that we ask about is the post hoc review. One of the things that NSF did under Neal Lane was to establish through its advisory committees what I call a rolling 3-year post hoc review. Every 3 years the committee will come back and look retroactively at what the results have been. This is terrific.
The next test will be how this is incorporated into budget decisions prospectively. Our test for GPRA is not to specify chapter and verse some "score" by which this gets translated, but rather to ask if the results of post hoc reviews are being incorporated into decision-making and priority-setting within the agency.
More concrete measures are possible with applied research. We work with the agencies and ask them what performance measures make sense. Then we establish them and follow how they implement them. For example, in this post-El Niño period, NASA has set a performance target of achieving a resolution of 25 kilometers over 90 percent of the non-ice-covered surface of the Earth every 2 days. Within USDA, the Agricultural Research Service has set specific targets for particular programs, such as community quarantines and fly pests.
When you come to the operation of facilities, a different, more concrete set of measures applies. Facilities ought to be operated efficiently, and their performance can be benchmarked; as a result, here we have guidelines. When you budget to build or upgrade a facility, our definition of success under the Results Act involves four questions: Is it on schedule? Is it within budget? Is it within 110 percent of estimates? And as you operate the facility, are you up at least 90 percent of your scheduled operating time?
There are obvious dangers with the Results Act or, more accurately, obvious risks. It could be procrustean. The measures you use could distort the scientific process. For example, if we started measuring an academic department by the number of papers published, you know what the results would be. We know, too, and so we don't do that. We ask for a range of measures, both quantitative and qualitative. We try to avoid what Larry Tribe called "the dwarfing of soft variables" in the process.
Another risk is that goals can be misinterpreted. When you set goals for yourself, they can be floors, they can be minimum requirements, or they can be ceilings. Some agencies come to us and say, "We're trying to stretch, and we have set performance goals that are very high. Quite frankly, the odds that we will not meet them are substantial, but we'd rather have that be the objective." We say that's fine. Other agencies come in and say, "Our definition of accountability is setting a goal we will meet so we can certify to you after a year or so that it is met." We say that's fine. We are willing to work with both regimes at this point because these are early days for GPRA. The FY 1999 budget was the first budget that had GPRA incorporated into it, the first budget that had performance plans.
But it is not the last. This is an iterative process. We are working with each agency to refine its presentation. Presentations will be better next year than they were this year. They will be better the year after that and, like most government processes, they will be considerably more honed and better thought out in a decade than they are today.
We will also do more to develop interagency judgments about performance. Right now we are working agency by agency. We are trying to establish some notion of accountability and credibility. Over time we will try to think about commensurability, but only then.
We think that is the way to implement the law. We think that GPRA is an advance. We think that it is helpful, in establishing the credibility of government, that there be communication about what agencies are trying to do and how they are going to do it.
And, in the end, this protects the research community itself. It is enormously helpful to the research community if there is credibility in the process by which the federal government supports research.
I say this because the alternative is not better science; it is budgeting by anecdote. I don't know how many times a year NIH Director Harold Varmus gets a legislative proposal for a center for the study of a disease that someone's brother-in-law had (or that some Member of Congress' brother-in-law had). Nor can I say whether NSF's Neal Lane has yet received his first proposal for genetic research on allergic reactions to dog hair or peanut butter, but don't laugh; it's only a matter of time.
Unless there is real credibility in decision-making and resource allocation, the process will be driven as much by politics and history as by science, judgment, and thought. That is why we think GPRA is important: because using analysis, thought, and communication is a better way to allocate federal research dollars than history, sexiness, or size.
Joshua Gotbaum is executive associate director of the Office of Management and Budget. This article is based on remarks delivered at the 23rd Annual AAAS Colloquium on Science and Technology Policy, held April 29–May 1, 1998, in Washington, DC.