Re: Publication Bias and the Need to Publish Negative Data?

Recently, Alex posted a response to one of my previous blog posts, in which I had highlighted Ben Goldacre’s TED talk on publication bias and its implications for science. Unlike my original claim, Alex doesn’t believe that “publishing negative data is necessary in basic science research”. He argues that publishing negative results may not be time-effective for scientists because “other researchers likely trust their own hands more than those of people they don’t know” and thus “they would try the experiment themselves anyway”. His second point is that “if they can’t replicate the result or cannot build from it, then that result is likely not true”. Taken together, these points appear to support his underlying argument that “publishing negative results is important for clinical trials, but not needed for basic science research”. Although his argument makes sense to me, I think it overlooks a few important considerations.

To start, I’d like to address the point that researchers are unlikely to look up failed experiments because they would rather run the experiments themselves. First, at least in my experience, the first step researchers take when designing a plan to tackle a research question is to survey what has already been done: both to see whether the question has already been answered, and to find prior work on which the new experiment can build. When negative results aren’t published, it becomes difficult to establish whether an experiment is worth the researcher’s time and effort. Even if negative results aren’t strictly necessary for basic science research, I think they are important for the efficiency of science. Second, I believe it is becoming harder to justify allocating resources to experiments that merely verify (or fail to verify) other researchers’ findings, rather than to experiments that will produce publishable results. This is simply because funding for science is shrinking. Even at Harvard, the effects of funding cuts are being felt (in fact, Alex is pictured in this recent article about the effects of funding cuts on Boston). Under this financial pressure, I believe researchers will be especially focused on producing results efficiently with the funding they have.

On Alex’s second point, that a result is likely not true if it cannot be replicated or built upon, I’d like to look at a few examples from the past that raise concerns about this claim.

First, as an examination of human nature, consider a case involving Nobel laureate Robert Andrews Millikan. Millikan devised and performed an experiment that used oil droplets to determine the charge of the electron; it was for this work (at least in part) that he received his Nobel Prize. What is unfortunate, though, is that when other scientists replicated his experiment, their deference to authority began to show. When they obtained results in line with Millikan’s, they didn’t question them at all. When they obtained results that weren’t in line with Millikan’s, they often looked for excuses about what they themselves were doing wrong, rather than for potential fundamental problems in Millikan’s experiment itself. Thus, even when scientists failed to replicate Millikan’s results, nothing was said right away, because they assumed the fault lay with themselves rather than with the original experiment.

The second example demonstrates potential problems with reporting results that are not reproducible. I use this example specifically because I personally believe the people most likely to try to reproduce an experiment are those in the lab that produced the original research: labs are often highly specialized (a lab investigating hantavirus is more likely to reproduce an experiment about hantavirus than a lab investigating malaria is), funding is limited (as mentioned above), and labs generally like to be sure of their science. Well, in a case back in the 1980s, David Baltimore and Thereza Imanishi-Kari were implicated in a controversy over results that could not be reproduced. After they published an immunology paper, a postdoctoral fellow in Imanishi-Kari’s lab, Margot O’Toole, discovered that she simply could not reproduce Imanishi-Kari’s results. After much trying, she brought the matter to Baltimore’s attention; although he initially refused to retract the paper, the case eventually escalated into a massive investigation, and the paper was retracted amid findings of scientific misconduct (findings that a federal appeals panel later overturned). However, despite reporting her concerns in an effort to better the scientific community, O’Toole lost her job during the affair. In fact, to this day, there is a general fear of, and negative connotation attached to, being a whistleblower. A potential consequence is that irreproducible results may not immediately come to light.

In short, I still believe that publishing negative results, while perhaps not strictly necessary, is an important step toward improving the quality of basic science research. I agree with Alex that publication bias has a more significant effect in the pharmaceutical world, but I do not believe that basic science is immune. I think we need to be aware that true objectivity in any scientist (or any human, for that matter) is a myth; publishing negative results can help paint a full picture of the natural world, instead of leaving it to faith that our scientists will notice when a piece of the puzzle has been inaccurately identified.