The 2009 American Recovery and Reinvestment Act, commonly known as the stimulus act, provided $21.5 billion for federal research and development. Among the act’s expected benefits to society was the creation or preservation of more than 400,000 jobs for one year. Now policy makers are investigating the impact those funds have had on creating jobs, expanding scientific knowledge, and spurring the economy.
Speakers at the annual AAAS Science and Technology Policy Forum discussed methods they’re developing to measure the societal impact of federal spending on research and development, including stimulus funds and the yearly federal budget. They described how new factors—such as citizen participation in policy-making and federal risk-taking to promote innovation—could be combined with more traditional approaches to measuring how federal investments in science and technology pay off for society.
“We have some methodologies now after many years of work in the field,” said Al Teich, director of the AAAS’s Science and Policy Programs. Teich moderated the session on societal impacts of science and technology. “We have data and experience, and hopefully we can learn from that experience.”
The 35th annual Forum was held 13-14 May in the Ronald Reagan Building and International Trade Center in downtown Washington, D.C. More than 500 scientists, policymakers, journalists, and others interested in science and technology policy attended this year’s Forum, which featured sessions on the federal research and development budget, climate change policies, cybersecurity, and other high-priority national and international issues.
During a 13 May session on societal impacts of science and technology, Stefano Bertuzzi, of the Office of Science Policy Analysis at the National Institutes of Health (NIH), described how he’s developing a new data system—called STAR METRICS—that can be used by federal agencies and universities to assess the impact of federal investments in science and describe their outcomes in response to state, congressional, and executive branch requests.
“The idea is to create a partnership between the federal government and the universities to document the outcomes of science,” said Bertuzzi. He’s leading the NIH part of the effort, which is jointly sponsored by the National Science Foundation and the White House Office of Science and Technology Policy.
The STAR METRICS program will provide infrastructure to gather data directly from universities’ administrative records to assess job creation due to stimulus funds. Bertuzzi emphasized that the program would impose no administrative burden on scientists.
“We ought to be able to explain—not just anecdotally—what have we done and what we have invested,” Bertuzzi told the AAAS audience. “When you enjoy budgets the size of the NIH, for example, which is over $30 billion, telling a good story comes a little short of explaining why we should continue to do this. There is a lot of room for growth in this.”
While the No. 1 priority of stimulus funds is job creation, it’s not the only goal, Bertuzzi said, and STAR METRICS is not limited to measuring job creation. “If we portray science investments as just a job-creating endeavor, we lose,” Bertuzzi said. The funds are also intended to spark innovation, such as by training scientists and by generating new companies and ideas. STAR METRICS will also be used to measure economic growth, through patents and university collaborations with local businesses, and the growth of scientific knowledge, through publication of scientific results and citation of those publications in other materials.
Timothy Persons, chief scientist in the U.S. Government Accountability Office, described the office’s efforts to provide Congressional committees with timely evaluations of the use of public funds for science and technology programs. The office’s reports help inform funding decisions. “We are objective, fact-based, nonpartisan, and non-ideological in terms of our approach,” Persons said.
Persons and his colleagues have published online reports on various science and technology issues, such as the use of biometrics—including facial, fingerprint, and retina recognition—in security; strategies to cope with wildfires; and approaches the federal government can take to bolster cybersecurity.
“We try not to do technology assessment just for technology’s sake,” he said. “It’s usually technology in the context of some policy or a current challenge.”
Persons’ office is expanding its science and technology staff to fulfill Congressional requests for reports and testimonies on science and technology. They’re hiring more doctorate-level science and technology experts, and Persons plans to employ short-term workers such as the AAAS Science and Technology Policy Fellows and experts from the National Academies who would do a rotation through the Government Accountability Office.
The reports issued by Persons’ office are known as technology assessments, and they highlight the trade-offs in public policy issues in which the U.S. government currently has an interest or is likely to have an interest in the future. “Our program is designed to be balanced and objective in terms of federal programs and public policy issues,” Persons said.
Now, his office is working on a geoengineering report, which it plans to release in November. Geoengineering has been proposed as a strategy to offset the negative effects of climate change through large-scale interventions, such as reflecting sunlight back into space and removing carbon dioxide from the atmosphere.
Persons and his colleagues are taking a different approach to this report: they’re putting the report into a social context. The report will discuss “public awareness, behaviors, anxieties or choices of a technology such as climate intervention,” Persons said. “On one hand, it’s important in a technology assessment to talk to the scientists and experts. It’s another thing to figure out how would the school teacher or the nurse or the fireman think about the U.S. government pursuing policies that may actually involve intervening with the climate,” he said.
Speaker Todd La Porte agreed that public input is important in policy-making. La Porte is an associate professor in the School of Public Policy at George Mason University. Before coming to the university, La Porte was an analyst in the now-defunct Office of Technology Assessment, which provided research for the U.S. Congress. Although the Office of Technology Assessment was disbanded in 1995, there continues to be a need for a public institution to evaluate complex, highly integrated science and technology issues, La Porte said.
Such an institution can deal with complexity and can “husband knowledge, can transmit it from generation to generation, can devote resources to solve problems that aren’t obvious and that take sustained effort to understand,” La Porte said. “Institutions have memory—they can learn and reproduce their agents and can adapt to new conditions if those conditions don’t change too fast.” La Porte said that it’s possible that the Office of Technology Assessment failed because it could not keep up with the changing political environment.
Since the office was disbanded 15 years ago, conflict has emerged over what is the “right” model for technology assessment. “The claim here is that there is a need for a 21st century technology assessment model, after the collapse of the OTA and its 20th century way of doing things,” La Porte said. In the classic model of technology assessment—the one used by the Office of Technology Assessment—experts framed the research question in consultation with Congress, prepared a study, consulted other experts, analyzed policy options, and then presented the findings to Congress and the public.
But some criticized that classic model for not including the public, said La Porte, citing the Danish Board of Technology as an example of citizen-focused technology assessment. In this model, citizen panels meet to conduct studies, which reportedly cost less, take less time, and are more broadly representative of citizens’ concerns compared to the classic model of technology assessment.
Public participation in science and technology policy-making is a trend in areas like coastal management, environmental justice, and emerging technology. The public can help identify the ethical, political, and social implications of science and technology policy—implications that have been missing from traditional technology assessments, La Porte said.
In the United States, a recently proposed approach to public engagement in technology assessment is Expert & Citizen Assessment of Science and Technology (ECAST), a virtual network of activities conducted by nonprofits and universities that aims to create an alternative to centralized, expert-oriented technology assessment organizations like the OTA.
La Porte argued that the nation needs citizen input, but also a trusted institution to help Congress make sense of technical and administrative aspects of policy. “We need public engagement. We need institutions. We need to reestablish trust in our institutions,” La Porte said. “And we need to provide a strong base for science and technology and the professionals in those fields to communicate adequately to policymakers.”
Rachelle Hollander emphasized how the federal research and development budget reflects national values, such as security and competitiveness. Hollander is the director of the Center for Engineering, Ethics and Society at the National Academy of Engineering. At the AAAS Forum on Science and Technology Policy, she put science and technology research priorities into context with societal influences, acknowledging that science, technology, and society develop, evolve, and depend on each other.
“Social responsibility requires conditions in which research and results promote human well-being,” Hollander said. Using agriculture research as an example, she discussed how the public is not always the first priority in policy decisions. In agriculture research, “for a long time one of the top priorities was productivity: the amount of crop per unit of space,” she said. Trade-offs and concerns from the public weren’t really weighed into the decision to make productivity a research priority.
The rights of communities to have clean water or to avoid other negative consequences of the push for high-yield crops might have taken a back seat, Hollander said. “We have to recognize that sometimes the negative parts are inextricable from the positive,” she said.
Jamie Link, a policy analyst at the Science and Technology Policy Institute, suggested that government act as an entrepreneur when it tries new approaches—even those that involve uncertainty—to spur technology developments. The Institute is a federally funded research and development center that provides analytic support to the White House Office of Science and Technology Policy and federal agencies.
As an example of the government acting as an entrepreneur, Link described the federally funded Small Business Innovation Research Program, which funds research in small firms with the aim of producing marketable products. In this context, Link said, the government acts as an entrepreneur by creating an environment that facilitates the commercialization of funded research, even though it’s not certain whether the businesses it invests in will successfully commercialize their technology.
Some degree of failure to meet program objectives is to be expected when uncertainty is associated with policy actions, she said, but we can still learn something by studying these failures.
The speakers participated in the AAAS Science and Technology Policy Forum’s “Major Issues in Science and Technology Policy” session; the Forum also included sessions on new approaches to international science and technology engagement and on climate change issues.
See more news from the 2010 AAAS Science & Technology Policy Forum.