Deeply rooted assumptions creep into decision-making in unrecognized ways—even among the most well-intentioned peer-reviewers, journal editors, and science funders—and that can prevent the best science from being sponsored or published, experts said at a recent AAAS forum on implicit bias.
To demonstrate such ingrained assumptions, social psychologist Brian Nosek put forum participants to the test. First, he asked them to shout out whether different words were “male” or “female,” and he clocked their answers. Beginning with gender-specific pronouns such as “he” and “she,” the Implicit Association Test seemed easy at first. It was not much more difficult when the categories were “career or male” and “family or female.” When the categories became “career or female” and “family or male,” however, response times lagged. Conflicting answers and embarrassed laughter followed.
The order of word-pairings affected the test results only slightly, said Nosek, a professor at the University of Virginia and executive director of the Center for Open Science. Our brains’ automatic, continuous efforts to make sense of the world, combined with life experiences, are the key drivers of implicit bias, he added: “We didn’t evolve to be fair—we evolved to survive and thrive,” he said, but “if we can be more humble about the biases that exist in us that are counter to our values, then we open up the possibility for external strategies to help us uphold our values while making decisions.”
Unconscious assumptions about gender, ethnicity, disabilities, nationality, and institutions clearly limit the science and technology talent pool and undermine scientific innovation, said AAAS Board Chair Geraldine Richmond. As an early-career faculty member in the 1980s, Richmond—now presidential chair in science and professor of chemistry at the University of Oregon—recalled how a fear of gender bias prompted her to use initials, rather than her full name, on her first major journal article. She convened the 28 April forum at AAAS to identify next steps toward minimizing implicit bias in peer review.
The problem of implicit bias is not only about fairness, said speaker Jo Handelsman, associate director for science in the White House Office of Science and Technology Policy. Publishing opportunities and research grants are “gateways to success,” she said, and certain science and engineering fields need more candidates from different backgrounds to pass through those gateways. At the same time, she added, “Diverse groups are more productive, more creative, and generate more innovation.”
The AAAS forum featured presentations by journal editors, federal funders, and researchers. Editors cited a U.S.-centric bias as a major problem in peer review. Edward Campion of the New England Journal of Medicine noted, for instance, that countries with fewer resources disproportionately suffer “diseases of poverty,” yet those countries are poorly represented among reviewers and therefore risk receiving less attention than they deserve. Similarly, at the American Chemical Society (ACS), a large portion of submissions in 2015 came from China and other countries in Asia, but those authors remain somewhat underrepresented in terms of published output, said Heather L. Tierney, managing editor, ACS Publications.
Journal data presented at the forum seemed to suggest that publishers may be somewhat further along in addressing gender bias, although speakers described a need for more female editors and peer-reviewers. Brooks Hanson, director of publications at the American Geophysical Union (AGU), said that, at his organization, papers with women listed as the first author now have a higher acceptance rate than those with men as first authors. However, he added, “Significantly more editors and reviewers are male compared to the distribution of AGU members and accepted first authors, and this may have important effects on career development.” Recruiting more women to the ranks of elite journals can prove challenging, given the many career and family demands on women in science, said Simine Vazire, editor-in-chief of Social Psychological and Personality Science. Some 40% of the journal’s associate editors are women, said Vazire, an associate professor at the University of California, Davis, but she added, “To get six men to agree to serve as editors, I invited seven. To get four women to agree, I invited twenty-one.”
To help eliminate bias, Vazire recommended double-blind peer review, in which authors and peer-reviewers are unaware of each other’s identity, or even triple-blind review, which also prevents editors from seeing authors’ names. Sowmya Swaminathan, head of editorial policy at Nature, said that acceptance of double-blind review may vary across scientific fields as well as geographic regions. In a 2012 experiment, only about one-fifth of monthly submissions to a Nature journal chose the double-blind option, Swaminathan said, although three-fourths of surveyed readers were “supportive” of the option. A double-blind option was introduced across all primary Nature research journals in 2015. The greatest use of the option was seen among submissions from China and the United States, and in climate science, Swaminathan reported.
Other speakers at the forum described funding-agency efforts to address concerns raised by a 2015 report of the U.S. Government Accountability Office (GAO), which called for better data and information-sharing related to female researchers. That report, Women in STEM Research, identified “no disparities in success rates between women and men” applying for research grants at three federal agencies, but insufficient data at three other agencies. Suzanne C. Iacono, head of the Office of Integrative Activities at the National Science Foundation (NSF), reported a “slight rise in awards to women” between 2001 and 2014, but she noted that only one-quarter of all proposals to the agency were submitted by women scientists. In 2014, 23% of all proposals were funded, she said, and the success rate for women was slightly higher, at 24%. Yet men submitted a little more than 31,000 proposals, compared with about 11,000 from women.
For African-American researchers, the situation is more troubling, Iacono reported: The success rate for African-American submitters is about 18%, but such applicants represent only about 2% of the NSF’s total submissions. Sonny Ramaswamy, director of the National Institute of Food and Agriculture, and Richard Nakamura, director of the Center for Scientific Review at the National Institutes of Health (NIH), expressed similar concerns about grant applications from African-American scientists. At the NIH, African-American researchers “receive awards at 55% to 60% the rate of white applicants,” Nakamura said. “That’s a huge disparity that we have not yet been able to seriously budge,” despite special mentoring and networking programs, as well as an effort to boost the number of scientists from underrepresented minorities who evaluate proposals.
Studies of the peer-review process have shown that African-Americans and women “are held to higher standards to be judged competent,” said Molly Carnes, a professor of medicine, psychiatry, and industrial & systems engineering at the University of Wisconsin-Madison. Training can help to reduce implicit bias, Nosek said, but the positive impacts of such interventions tend to be short-lived. Moreover, Carnes noted, making reviewers aware of the neurological roots of implicit bias can backfire, causing some to believe that there is no way to avoid it. Nosek recommended structuring external processes to help minimize bias, while also encouraging reviewers to accept and become more mindful of the problem. To recruit and retain more diverse panelists, NSF’s Iacono said that giving reviewers the choice to remain at home or travel to NSF has increased the rate at which women participate in review panels. Carnes and colleagues are meanwhile studying how bias might affect NIH research project grants called R01s. Interaction among reviewers, dubbed “Score Calibration Talk,” may set up a potential for bias, Carnes said.
Shirley Malcom, director of Education and Human Resources programs at AAAS, noted that data on implicit bias remain incomplete and tend to focus more on gender than on ethnicity. Data on implicit biases that favor elite institutions over others are also in short supply, she said, adding, “we look not at all at persons with disabilities.”
In breakout groups, forum participants identified more uniform data-collection and data-sharing as critical next steps toward minimizing implicit bias in peer review. Marcia McNutt, editor-in-chief of the Science family of journals, said that technologies such as PRE, the Peer Review Evaluation service (which is owned by AAAS, publisher of Science), might be able to help publishers by providing bias training for reviewers. Eric Hall, PRE’s product director, said this is in the works. “PRE has in its roadmap a plan to help address bias in peer review, as part of online training modules for reviewers. We look forward to working with like-minded organizations to develop a curriculum that will be applicable to every discipline.”
U.S. Representatives Eddie Bernice Johnson (D-Texas), Rosa DeLauro (D-Connecticut), Louise Slaughter (D-New York), and Jackie Speier (D-California) served as honorary co-chairs of the forum. Speier, who made an appearance at the event, commended AAAS for confronting the problem. “Bias in peer review rots the scientific enterprise from within,” she said. “We all need the best, the most creative ideas to rise to the top.”
This article originally appeared in AAAS News & Notes in the 27 May 2016 edition of Science.