Experts are troubled by the number of studies, particularly in biomedicine, that have proven non-reproducible. | Image edited by Robert Beets/AAAS. Flickr/ressaure
Reproducibility — the ability to redo an experiment and get the same results — is a cornerstone of science, but it has been the subject of some troubling news lately. In recent years, researchers have reported that they could not reproduce the results from many studies, including research in oncology, drug-target validation, and sex differences in disease.
In response, journals such as Science have adopted new guidelines for certain types of studies, and members of the scientific community have published a flurry of articles and blog posts. But as of 2 May, a panel of experts that convened at the AAAS Forum on Science & Technology Policy was still troubled. Fortunately, while their tone was serious, the speakers also described some solutions that are moving forward.
A Far-Reaching Problem
From left: Story Landis, Robert Golub, and discussant Mark Frankel of AAAS | Carla Schaffer/AAAS
There are many reasons that scientific results may not be reproducible, explained the speakers, who focused their remarks on the biological sciences. Sloppy research is one possible culprit, according to Story Landis, director of the National Institute for Neurological Disorders and Stroke (NINDS). Studies may be designed poorly or fail to use appropriate statistics, or the experiment's details may be described inadequately in the published report. Researchers may also feel pressure to publish "cartoon biology" that overemphasizes the "exciting, big picture" and leaves out the more prosaic details, she said.
Brian Nosek, co-founder and director of the Center for Open Science, agreed that pressure on authors contributes to a "gap between scientific values and scientific practices." The more prestigious journals tend not to publish negative results — that is, studies in which a hypothesis is not borne out by the data — or studies whose chief aim is to replicate other findings. Researchers typically must publish in high-impact journals in order to advance their careers, and therefore have little incentive to conduct these types of studies despite their importance, Nosek said.
Studies that can't be reproduced due to outright fraud are relatively rare, according to Robert Golub, deputy editor of JAMA, the journal of the American Medical Association. But even when researchers are not intentionally engaging in misconduct, non-reproducible results are troubling, the speakers agreed.
"There has been a confluence of concern from various sources within the scientific community and from outside the scientific community in the last few years that the scientific enterprise is not producing new knowledge of sufficiently high quality," said Katrina Kelner, editor of Science Translational Medicine and organizer of the Forum session. "…This issue of reproducibility is a problem of increasingly great concern to the scientific community itself and it is, one could argue, legitimately of interest to the broader society because of the robust public support of scientific research."
As an example of the serious consequences that non-reproducible studies can have, Landis cited a report that a drug called minocycline showed promise in mouse models of the neurodegenerative disease ALS. These findings led to a phase III clinical trial funded by the National Institutes of Health, which enrolled over 400 patients between 2003 and 2007. However, the disease actually progressed faster in patients who received the drug than in those who received a placebo. When scientists rescreened minocycline and many other compounds from 221 studies in mouse models of ALS, they found no statistically significant effects.
A Diverse Toolkit
The National Institutes of Health is tackling the problem at its early stages by educating NIH-funded scientists about how to design and execute preclinical studies, according to Landis. The agency is also developing a one-hour training session and encouraging other NIH offices to develop more in-depth pilot programs that could be distributed broadly to universities. And NINDS has published recommendations for increasing transparency in biomedical research using animal models.
Peer review is another important stage at which to address reproducibility, said Golub. Results from clinical studies can be particularly hard to reproduce, he noted: human patients are complex, the potential to control experiments can be limited, and, often for cost-related or ethical reasons, researchers must rely on observational rather than experimental evidence. Furthermore, with physicians making decisions based on articles in journals like JAMA, the possibility of publishing invalid findings is "what keeps me awake at night," he said.
Rigorous peer review can help identify sound science, and journal editors should ensure transparent and objective reporting. However, "it's important to mention that there's still a fair amount of trust in the relationship between researchers and editors," Golub said. "We can look for obvious or less obvious statistical errors, but there's still a certain amount of faith that the study was conducted in the way the authors said it was."
Yet, biomedical journals could do more, Golub acknowledged, such as encouraging the use of more appropriate statistics and meta-analysis, publishing negative studies more frequently, and taking a stricter approach to transparency with regard to industry involvement and intellectual conflicts of interest.
Brian Nosek | Carla Schaffer/AAAS
Nosek recommended going even further to address science's overall culture, which his non-profit technology startup, the Center for Open Science, does in a number of ways. It offers tools for streamlining scientists' workflows so that experiments can be better organized, along with an archiving system that keeps snapshots of a project at various stages along the way. The organization also coordinates large-scale reproducibility projects that researchers from multiple institutions can volunteer to join.
For journals, the Center offers tools that could be used to nudge authors in the right direction, such as "badges" for articles based on practices that enable data-sharing. It also facilitates a process by which journal editors can decide whether to accept a paper based on a proposal that authors submit before they collect their data. Thus, if a study's hypothesis and methods are found worthy, its authors can be assured of publication regardless of the experiments' outcomes.
The culture around data-sharing is also changing in the private sector. Pharmaceutical companies have historically limited access to their clinical trial data to company researchers, their collaborators, and regulatory agencies like the FDA. But last year, two industry trade groups, the Pharmaceutical Research and Manufacturers of America and the European Federation of Pharmaceutical Industries and Associations, issued a set of joint guidelines for sharing data more broadly with patients, researchers and the public. "We're living in a brave new world where data is a commodity and access to data is an expectation," said Ariella Kelman, group medical director at Genentech.
Ariella Kelman | Carla Schaffer/AAAS
The Roche Group, which owns Genentech, is now making the datasets from its clinical trials — vast quantities of information including the measurements made during every patient visit — available upon request to outside researchers. Kelman outlined several basic principles that underlie Roche's approach to data sharing: respect for patients and the role of regulatory authorities, and commitment to innovation and scientific progress.
"There are some risks with the broader sharing of complex data," she acknowledged, citing the possibility of privacy breaches, frivolous lawsuits, or benefits to competitors. But, the company has taken rigorous measures to de-identify the patient data, and ultimately, she said, the opportunity to optimize patient benefits, gain public trust, and align with industry trends will make this effort worthwhile.
Ultimately, shoring up reproducibility as a cornerstone of science will require a multifaceted solution. "The individual scientist sits in an ecosystem of many different players," including journal publishers, industry providers that offer services for clinical trials, and funders, who each play a role in determining the incentive structure that influences scientists' behavior, Nosek said. "Very little will ultimately get changed unless there's coordination across all" of these groups, he added.