High-profile reports of bacteria that incorporated arsenic instead of phosphorus and
of a particle traveling faster than the speed of light have not fully stood up
to public scrutiny. The inability of other scientists to replicate these widely
publicized experiments has brought increased attention to the issue of the
reproducibility of scientific experiments.
Adding fuel to the fire, Amgen scientists reported that they could not reproduce 89% of the published findings they investigated on promising targets for cancer therapeutics. These events have
led to outrage that public dollars are being spent on such poor research. There have been a number of proposals for ways
the scientific community can maximize the reproducibility of published results.
These include additional ethics training for students and young investigators
and standardization of guidelines for publication. Unfortunately,
too often the lack of reproducibility gets conflated with the more
serious issues of carelessness and fraud. While both poor science and (in rare
cases) outright fraud contribute to the publication of work that cannot be
reproduced, there are other issues to consider.
It is important to be clear that the failure of one lab to replicate the work of another is not the same as proving the original work false. Many bench scientists struggle to reproduce results, both published results and results from within their own labs. When dialogue between the two scientists performing the experiments is open, it is usually easy to see where miscommunication or a lack of detail in a protocol has led to a different result. In my opinion, a vital step in reducing issues with reproducibility is to encourage the publication of detailed protocols. Far too often, Materials and Methods sections are short and among the first areas to be cut when conforming to a journal’s word limit. Instead, we should expect each published article to specify important details, including the temperature at which experiments were performed, the concentration of all reactants, and the equipment used for each step of a procedure. Only when replicate experiments have been performed under precisely the same conditions should the original be regarded with skepticism.
New and interesting ways of
providing detailed experimental procedures have proliferated in recent years
with the publication of Nature Protocols and JoVE, two repositories for highly
detailed methods. Providing a thorough explanation of techniques and procedures
will become common practice if high-profile labs lead the way by sharing their novel
methods. The NIH can encourage the use of these repositories by making
procedural transparency a component of the score that determines whether a
grant is funded or not.
There are creative attempts under way to identify irreproducible results (or, conversely, to identify reproducible ones) in order to minimize the time and effort other labs spend following up on poor data. The blog Retraction Watch aims to make sure the scientific community is aware of papers that have been withdrawn or retracted. While this project aids reproducibility less directly, it serves the larger goal of preventing researchers from wasting time trying to replicate false or incomplete experiments. The authors of the blog note in their
FAQ section that there is no comprehensive database of retractions from
scientific journals. While the retraction of an article may be noted in place of the original manuscript on the publisher’s website, little publicity is given to these notices. Indeed, it is hardly in the publisher’s best interest to publicize them.
A particularly bold group has
approached this problem by founding the Reproducibility Initiative, a new
resource to help scientists by adding to the impact of experiments that have
been reproduced. For a fee, the group will match interested investigators to
researchers who will then attempt to repeat their experiments. As part of the
service, the investigator has the option to publish the results of validating
experiments in an online journal (publication is optional, so investigators
may choose not to publish experiments that conflict with their original
results). Validation of the initial results qualifies the original manuscript, if published in a participating journal, for recognition of that reproduction. Presumably, this validation adds to the impact of the results.
How well
this initiative succeeds will depend entirely on the quality of the scientists
performing the follow-up experiments and the quality of communication between the
original and replicating labs. If the follow-up lab is able to quickly
replicate a result, the situation will be beneficial to all. However, if the
follow-up lab cannot replicate the published result, the amount of benefit will
depend on the two labs working together to determine why the replication
failed. An inexplicable discrepancy helps no one, for the reasons discussed
above. As a scientist, I would certainly be glad to know others could reproduce
my results, but if they failed to do so, I would not necessarily trust their
results over my own, given my greater familiarity with my own methods. Inexplicable discrepancies could lead to time-consuming and costly searches to reconcile the results of the two labs. Even one bad experience of this kind may discourage scientists from using the service.
How do you
think the scientific community can improve the reproducibility of publicly
funded research? Leave your ideas in the comments!
Irene Reynolds Tebbs
6th year, Molecular Biophysics and Biochemistry