[ RadSafe ] Article: No to Negative Data
HOWARD.LONG at comcast.net
Fri Sep 5 17:01:18 CDT 2008
Proving the absence of anything (God, cancer, etc.) is very difficult.
However, I will try when I have descended from my elation over the prospects of repeal of some socialist slave-shackles on TV this week.
Howard Long
-------------- Original message --------------
From: "Livesey, Lee M" <Lee_M_Livesey at RL.gov>
> And all of this from an "empirical" science based upon inferences of what is not
> actually observed......
>
> -----Original Message-----
> From: John Jacobus [mailto:crispy_bird at yahoo.com]
> Sent: Thursday, September 04, 2008 07:11 PM Pacific Standard Time
> To: Dan W McCarn; 'radsafe'; HOWARD.LONG at comcast.net
> Subject: RE: [ RadSafe ] Article: No to Negative Data
>
> Dr. Long,
> More importantly, the study was flawed. Bad data is bad data, but knowing that
> does not seem to bother you.
>
> Did you ever get a copy of that PNAS paper?
>
> +++++++++++++++++++
> It is also a good rule not to put overmuch confidence in the observational
> results that are put forward until they are confirmed by theory.
> Arthur Eddington
>
>
> -- John
> John Jacobus, MS
> Certified Health Physicist
> e-mail: crispy_bird at yahoo.com
>
> --- On Thu, 9/4/08, HOWARD.LONG at comcast.net wrote:
>
> From: HOWARD.LONG at comcast.net
> Subject: RE: [ RadSafe ] Article: No to Negative Data
> To: "Dan W McCarn" , crispy_bird at yahoo.com, "'radsafe'"
>
> Date: Thursday, September 4, 2008, 12:18 PM
>
>
>
> Viva publication of negative results, like the Nuclear Shipyard Worker Study.
>
> Only by re-analysis of the data of this "negative" study (previously analyzed
> one-tailed, to show only the absence of harm from >0.5 rem exposure) was Cameron,
> a member of its Advisory Board, able to show a positive benefit: total mortality
> reduced to 0.76, cancers similarly reduced.
>
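The re-analysis point above turns on test directionality. As a minimal sketch, not drawn from the shipyard study itself and using a purely illustrative z-score: a study framed one-tailed to detect only harm can never register a statistically significant benefit, because the protective direction is excluded by construction.

```python
from statistics import NormalDist  # standard-normal CDF from the Python stdlib

def one_tailed_p_for_harm(z):
    # P-value when the test is framed only to detect harm (effect > 0).
    return 1.0 - NormalDist().cdf(z)

def two_tailed_p(z):
    # P-value when the effect is allowed to go in either direction.
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

# Illustrative z-score for a *protective* effect (negative direction).
z = -2.3
print(round(one_tailed_p_for_harm(z), 3))  # 0.989 -- "no significant harm found"
print(round(two_tailed_p(z), 3))           # 0.021 -- significant, just beneficial
```

The same data thus reads as a null result under the harm-only framing and as a significant finding under the two-sided one, which is the substance of the re-analysis claim.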
> Beware standardization (especially in health care).
>
> Howard Long
>
> -------------- Original message --------------
> From: Dan W McCarn
>
> > << ... what isn't right. Well, only a small number of potential hypotheses are
> > correct, but essentially an infinite number of ideas are not correct. >>
> >
> > Dear John:
> >
> > Hogwash! Whose paradigms do you live with? Can there be multiple paradigms
> > for which data are applicable? Can different sets of hypotheses be
> > developed for each paradigm?
> >
> > Any scientist focused on placing a structure around empirical observations
> > is faced with this dilemma - I have taken data from thousands of dry oil &
> > gas exploration wells (very negative results for an O&G paradigm) turned it
> > sideways and gained understanding about where I might explore for uranium (a
> > very different paradigm). I have worked on databases that incorporate
> > complex information from almost 100,000 boreholes, most of them essentially
> > "dry" holes, to provide an integrated approach to management of these data.
> >
> > << Although publishing a negative result could potentially save other
> > scientists from repeating an unproductive line of investigation, the
> > likelihood is exceedingly small. >>
> >
> > Again Hogwash!
> >
> > Please don't let me interfere with your ideas or Dr. Wiley's here, but most
> > critical mineral deposit discoveries - as well as oil and gas - are based on
> > what might previously have been considered negative data, observations meant
> > to prove or disprove one or another hypothesis in a different paradigm, or
> > simply observational data for which the answers still lie shrouded (the
> > exploration budget ran dry) until the right mind comes along, adds a piece
> > or two of additional data and understands the order a little better. I can
> > start with the uranium deposits at Ambrosia Lakes as well as deposits in the
> > Gas Hills in Wyoming. These were not discovered until a different paradigm
> > was applied to the old data.
> >
> > I had the fortune once to explore a major basin in Southern Colorado that
> > was long thought devoid of uranium, until I found an ancient publication
> > (Siebenthal, 1910) whose careful and detailed observations allowed me to
> > conceptually integrate the data that I had, and understand the major
> > features and processes controlling uranium mineralization in the basin and
> > to identify a major target. As my boss said, "Thank God you're stubborn,"
> > because I had to overcome the mindsets and preconceptions of every other
> > geologist in the office.
> >
> > Perhaps in my industry, sharing of negative results is considered so
> > extremely important that a side-industry has long since emerged to ensure
> > successive exploration efforts don't re-invent the wheel by
> > providing these "negative" data.
> >
> > Maybe the geological sciences learned early-on that exploration was an
> > open-ended venture where no one had a complete understanding of what the
> > future might bring. Since most exploration produces negative results (except
> > for the value of the empirical data), geologists must be and are eternally
> > optimistic about future chances (and different paradigms, not just
> > hypotheses) and their results are maintained for the next effort.
> > Pessimistic geologists never find anything!
> >
> > Dan ii
> >
> > Dan W. McCarn, Geologist; 3118 Pebble Lake Drive; Sugar Land, TX 77479; USA
> > Home: +1-281-903-7667; Austria-cell: +43-676-725-6622
> > HotGreenChile at gmail.com; UConcentrate at gmail.com
> >
> >
> > -----Original Message-----
> > From: radsafe-bounces at radlab.nl [mailto:radsafe-bounces at radlab.nl] On Behalf
> > Of John Jacobus
> > Sent: Wednesday, September 03, 2008 8:48 PM
> > To: radsafe
> > Subject: [ RadSafe ] Article: No to Negative Data
> >
> >
> > I read this article some time ago. While the subject matter is oriented
> > toward the life sciences, I think the topic is valid throughout science.
> >
> > THE SCIENTIST Volume 22 | Issue 4 | Page 39
> >
> >
> > No to Negative Data
> > Why I believe findings that disprove a hypothesis are largely not worth
> > publishing.
> >
> > A frequent criticism in biology is that we don't publish our negative data.
> > As a result, the literature has become biased towards papers that favor
> > specific hypotheses (Nature, 422:5545, 2003). Some scientists have become
> > so concerned about this trend that they have created journals dedicated to
> > publishing negative results (e.g., Journal of Negative Results in
> > Biomedicine). Personally, I don't think they should bother.
> >
> > I say this because I believe negative results are not worth publishing. Rest
> > assured that I do not include drug studies that show a lack of effectiveness
> > towards a specific disease or condition. This type of finding is significant
> > in a societal context, not a scientific one, and we all have a vested
> > interest in seeing this type of result published. I am talking about a set
> > of experimental results that fail to support a particular hypothesis. The
> > problem with these types of negative results is that they don't actually
> > advance science.
> >
> > Science is a set of ideas that can be supported by observations. A negative
> > result does not support any specific idea, but only tells you what isn't
> > right. Well, only a small number of potential hypotheses are correct, but
> > essentially an infinite number of ideas are not correct. I don't want to
> > waste my time reading a paper about what doesn't happen; I'd rather read
> > just those things that do happen. I can remember a positive result because I
> > can associate it with a specific concept. What do I do with a negative one?
> > It is hard enough to follow the current literature. A flood of negative
> > results would make that task all but impossible.
> >
> > Although publishing a negative result could potentially save other
> > scientists from repeating an unproductive line of investigation, the
> > likelihood is exceedingly small. The number of laboratories working on the
> > exact same problem is relatively small, and thus the overlap between
> > scientific pursuits at the experimental level is likely to be minuscule. It
> > is a favorite conceit of some young scientists that they are doing the next
> > great experiment, and if it doesn't work, then the world needs to know.
> > Experience suggests otherwise.
> >
> > Twenty-five years ago, I tried to publish a paper showing that thrombin did
> > not stimulate cells by binding to its receptor. Using a combination of
> > computer models and experiments, I showed that the receptor hypothesis was
> > clearly wrong. The paper detailing this negative result was emphatically
> > rejected by all journals. I was convinced that the status quo was threatened
> > by my contrary finding. However, what I failed to do was replace a
> > hypothesis that was wrong with one that was correct.
> >
> > Negative results can also be biased and misleading in their own way, and are
> > often the result of experimental errors, rather than true findings. I have
> > fielded questions from investigators who could not reproduce my results due
> > to the lack of a critical reagent or culture condition. Similarly, I have
> > not been able to reproduce the results of other scientists on occasions, but
> > I don't automatically assume they are wrong. Experimental biology can be
> > tricky, and consistently obtaining results that support a hypothesis can be
> > challenging. It's much easier to get a negative result and mistake a
> > technical error for a true finding.
> >
> > Although I believe negative findings do not merit publication, they are the
> > foundation of experimental biology. Positive findings are always built from
> > a vastly greater number of negative results that were discarded along the
> > way to publication. And certainly, if scientists feel pressure to publish
> > positive data, it stands to reason that some of those positive data are
> > wrong. The solution to that bias is to treat published results more
> > skeptically. For example, we should consider all published reports the same
> > way we consider microarray data. They are useful in the aggregate, but you
> > should not pay much attention to an individual result.
> >
> > Even if literature bias exists regarding a particular hypothesis, positive
> > results that are wrong eventually suffer the fate of all scientific errors:
> > They are forgotten because they are dead ends. Unless new ideas can lead to
> > a continuous series of productive studies, they are abandoned. The erroneous
> > thrombin receptor hypothesis that I tried so hard to disprove was rapidly
> > abandoned several years later when the correct model was introduced (it
> > clips a specific protein).
> >
> > Steven Wiley is a Pacific Northwest National Laboratory Fellow and director
> > of PNNL's Biomolecular Systems Initiative.
> >
> >
> > +++++++++++++++++++
> > It is also a good rule not to put overmuch confidence in the observational
> > results that are put forward until they are confirmed by theory.
> > Arthur Eddington
> >
> >
> > -- John
> > John Jacobus, MS
> > Certified Health Physicist
> > e-mail: crispy_bird at yahoo.com
> >
>
>
>
>
> _______________________________________________
> You are currently subscribed to the RadSafe mailing list
>
> Before posting a message to RadSafe be sure to have read and understood the
> RadSafe rules. These can be found at: http://radlab.nl/radsafe/radsaferules.html
>
> For information on how to subscribe or unsubscribe and other settings visit:
> http://radlab.nl/radsafe/
More information about the RadSafe mailing list