[ RadSafe ] Article: No to Negative Data
crispy_bird at yahoo.com
Wed Sep 3 20:47:54 CDT 2008
I read this article some time ago. While the subject matter is oriented toward the life sciences, I think the topic is valid throughout science.
THE SCIENTIST Volume 22 | Issue 4 | Page 39
No to Negative Data
Why I believe findings that disprove a hypothesis are largely not worth publishing.
A frequent criticism in biology is that we don't publish our negative data. As a result, the literature has become biased towards papers that favor specific hypotheses (Nature, 422:554-5, 2003). Some scientists have become so concerned about this trend that they have created journals dedicated to publishing negative results (e.g., Journal of Negative Results in Biomedicine). Personally, I don't think they should bother.
I say this because I believe negative results are not worth publishing. Rest assured that I do not include drug studies that show a lack of effectiveness against a specific disease or condition. This type of finding is significant in a societal context, not a scientific one, and we all have a vested interest in seeing this type of result published. I am talking about a set of experimental results that fail to support a particular hypothesis. The problem with these types of negative results is that they don't actually advance science.
Science is a set of ideas that can be supported by observations. A negative result does not support any specific idea, but only tells you what isn't right. After all, only a small number of potential hypotheses are correct, but an essentially infinite number of ideas are not. I don't want to waste my time reading a paper about what doesn't happen; I'd rather read just those things that do happen. I can remember a positive result because I can associate it with a specific concept. What do I do with a negative one? It is hard enough to follow the current literature. A flood of negative results would make that task all but impossible.
Although publishing a negative result could potentially save other scientists from repeating an unproductive line of investigation, the likelihood is exceedingly small. The number of laboratories working on the exact same problem is relatively small, and thus the overlap between scientific pursuits at the experimental level is likely to be minuscule. It is a favorite conceit of some young scientists that they are doing the next great experiment, and if it doesn't work, then the world needs to know. Experience suggests otherwise.
Twenty-five years ago, I tried to publish a paper showing that thrombin did not stimulate cells by binding to its receptor. Using a combination of computer models and experiments, I showed that the receptor hypothesis was clearly wrong. The paper detailing this negative result was emphatically rejected by all journals. I was convinced that the status quo was threatened by my contrary finding. However, what I failed to do was replace a hypothesis that was wrong with one that was correct.
Negative results can also be biased and misleading in their own way, and are often the result of experimental errors rather than true findings. I have fielded questions from investigators who could not reproduce my results due to the lack of a critical reagent or culture condition. Similarly, I have not been able to reproduce the results of other scientists on occasion, but I don't automatically assume they are wrong. Experimental biology can be tricky, and consistently obtaining results that support a hypothesis can be challenging. It's much easier to get a negative result and mistake a technical error for a true finding.
Although I believe negative findings do not merit publication, they are the foundation of experimental biology. Positive findings are always built from a vastly greater number of negative results that were discarded along the way to publication. And certainly, if scientists feel pressure to publish positive data, it stands to reason that some of those positive data are wrong. The solution to that bias is to treat published results more skeptically. For example, we should consider all published reports the same way we consider microarray data. They are useful in the aggregate, but you should not pay much attention to an individual result.
Even if literature bias exists regarding a particular hypothesis, positive results that are wrong eventually suffer the fate of all scientific errors: They are forgotten because they are dead ends. Unless new ideas can lead to a continuous series of productive studies, they are abandoned. The erroneous thrombin receptor hypothesis that I tried so hard to disprove was rapidly abandoned several years later when the correct model was introduced (it clips a specific protein).
Steven Wiley is a Pacific Northwest National Laboratory Fellow and director of PNNL's Biomolecular Systems Initiative.
It is also a good rule not to put overmuch confidence in the observational results that are put forward until they are confirmed by theory.
John Jacobus, MS
Certified Health Physicist
e-mail: crispy_bird at yahoo.com