Performance of R&D, Accountability of R&D, and The Government Performance and Results Act
Andrew J. Vogelsang
I want to provide an update on the Government Performance and Results Act (GPRA) and the developments to date. I will concentrate on some of the politics and the Hill shenanigans that are going on. It's quite an interesting process from GAO's point of view.
The General Accounting Office (GAO) is the investigative arm of Congress; it is sometimes referred to as the watchdog agency. We have the statutory responsibility to investigate wherever federal funds are spent, with few exceptions (namely, top-secret CIA activities and programs of that kind).
With respect to the Results Act, one description of GAO's activities is particularly relevant. When I first came to GAO, one of my supervisors explained to me that what GAO does is something like going to the battlefield after the battle's been fought and shooting all of the wounded.
The Government Performance and Results Act essentially mandates performance-based management. It was enacted in 1993 by the Democrats when they were in control of Congress. GPRA was passed unanimously. That is something to remember as this process unfolds. It was passed when the Democrats had a majority, and the Republicans embraced it when they came to power. (The Democrats typically call it GPRA and the Republicans call it the Results Act.) There are three essential requirements: agencies must prepare multiyear strategic plans, annual performance plans, and annual reports comparing actual performance with the goals in those plans.
One important point to remember about the annual reports is that the agencies can go back to Congress and say they would like to change these goals because they do not think the goals are particularly relevant or achievable.
There is an overall emphasis in the Results Act on measurable goals and metrics, which is the source of a lot of consternation among the science agencies. I pulled something out of the statute to give an example of how much emphasis is on these measurable goals.
"Most annual performance goals define an objective, quantifiable, and measurable target level of performance for each program activity." The consternation among the science agencies arose primarily because, in science, you are often trying to measure the unmeasurable. Often the true outputs or outcomes of R&D projects cannot be measured, although they can still be very important.
The science agencies were afraid that they would be at a disadvantage when they compete for federal funds against agencies like Social Security that might have more easily measured results.
However, there was a savior in the statutory requirements: the alternative form. If the Office of Management and Budget (OMB) approves, an agency can express its goals descriptively rather than quantitatively, describing what a minimally effective program and a successful program would look like. Initially, when the agencies were struggling with their strategic plans and their annual performance plans, no one knew whether OMB was going to allow this or how it would be received by Congress. Science agencies were hesitant about whether they should try to invoke this alternative form. My sense was that people on the Hill wanted to see a legitimate effort made at measuring the outputs of the R&D programs before an agency resorted to this measure.
There is another part of the alternative form under which certain agencies (or at least some programs of those agencies) do not have to come up with performance goals at all if doing so is impractical or infeasible. That also has to be approved by OMB; I don't know if it ever has been.
What happened as far as process goes on the Hill? When the agencies came out with their strategic plans, they submitted drafts to Congress, because the statute mandated a very interactive process. Congress would then send the drafts to GAO for review. GAO published guidelines about what should be in the plans and how we would review them. It was somewhat akin to giving out the questions on a test before the test is taken, but not the answers; in this case, that is not particularly helpful.
After GAO briefed the interested congressional teams on the Hill, congressional staff would go back to their offices and score the plans with a score card. The score cards centered on whether a plan had the required parts of a strategic plan. The staff did not look at whether the metrics were appropriate or whether they reflected real progress in the programs; it was just, "Is it there or is it not? Can we understand it or can't we?"
They would rate the plans from 0 to 100 in, I think, five areas. When the first round of draft plans was released, there was a big press conference on the Hill. Every plan received a failing grade. But that was just on the drafts.
After that, the final strategic plans were submitted and they went through the same process again. This time two plans passed: the Department of Education and the Department of Transportation. Both of them got scores in the 70s. That was a great press release opportunity, with a lot of coverage in The Washington Post and elsewhere.
After this whole process, GAO issued a capping report. We summarized all of the plans and our view of them overall, with a summary of each plan in an appendix at the back of the report. GAO came up with this bold statement on the plans in general: "On the whole, agencies' plans appear to provide a workable foundation for Congress to use." Now that may not sound like a bold statement, but it was not the view held on Capitol Hill by a lot of Members of Congress and committees. In fact, it broke with some of the commonly held views on the Hill. The House Government Operations Committee, I think, introduced a bill that would mandate that all agencies go back to square one and go through the whole process again. The House passed that bill, although it is unclear what will happen in the Senate.
But GAO's internal view (this is not an official view, but I think it was the general feeling throughout GAO) was that we should go through this whole process at least once and get some experience before we started going back and trying it all over again.
Once again, when the annual performance plans came out, Congress went through the same process. Congress gave the plans to GAO, GAO came back and briefed the teams of interested staff, and then Congress scored the plans again. Once again, a big press conference was held, led by Representative Dick Armey (R-Texas), who is in fact leading the whole charge on Capitol Hill regarding the Results Act. Once again, the agencies did not do well, but this time someone passed right away, and that was the Department of Transportation again. (I don't know what they know that no one else knows.)
But GAO issued guidance and, once again, provided the questions that we would be looking at when we reviewed the plans. These were followed closely by Congress.
In fact, the House leadership (the Speaker, the chairman of the Appropriations Committee, the majority leader, and two other committee chairmen) endorsed the plans.
With regard to these annual performance plans, GAO has not released the report yet, but it should be released soon. I don't know exactly when this will happen.
There are two important observations I can make that are directly related to the concerns of the science agencies:
I think both of these developments are important to the science agencies.
Andrew J. Vogelsang is senior evaluator for Energy, Resources, and Scientific Issues, General Accounting Office. This article is based on remarks delivered at the 23rd Annual AAAS Colloquium on Science and Technology Policy, held April 29–May 1, 1998, in Washington, DC.