
Another Battle at Radon Pass, Version II



Hi All:

SORRY, THE "FROM" BUG ATE PART OF THE
PREVIOUS MAILING!
++++++++++++++++++++++++++++++++++++

A Comment by Fritz A. Seiler and Joseph L. Alvarez
on the Scientific Method and its Application to the
Discussion on the LNT and on Cohen's Radon Data.

Because of overlapping interests, this message is sent
both to RADSAFE and RISKANAL.

We disagree with Ken Mossman: It is not Bernie's job to show that
his radon data agree with the linear model.  The Scientific Method is
quite clear about the process: The yardstick of success is the
agreement of the model predictions with experimental data.

If the predictions of the linear model do not agree with the data,
then the modelers lose; it is as simple as that.  Also, the only way
that Cohen's data could be proven wrong would be by OTHER,
EXPERIMENTAL DATA and nothing else!  And when we say
"other experimental data,"  we specifically exclude numerical results
derived from an ANALYTICAL "experiment" like Lubin's
calculations.  Those numbers are not bona fide data but just the
output of yet another model.  What we really mean is good, honest
experimental information, the kind used by scientists who follow the
Scientific Method.  Lubin's "data" are definitely not of that kind. So,
there is no refutation of Cohen's data by Lubin's arguments.

Therefore, as long as the modelers cannot make predictions that
agree with Cohen's data, taking into account whatever confounding
effects they choose to include, they lose -- hands down.

This was the idea of the confounding factor argument from the very
start: Save the LNT by inventing a number of large ( > 20 sigma!)
confounding factors and try, in addition, to put the burden of proof
on Bernie Cohen!  Indeed, a highly convenient solution for the LNT
modelers!  But not exactly according to the rules of a hard science!
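
To put the "> 20 sigma" figure into numerical perspective, here is a
brief sketch (our illustration, not part of the original argument; it
assumes a Gaussian error model) of how improbable such a discrepancy is:

```python
# How improbable is a k-sigma discrepancy under a Gaussian error model?
# Illustrative sketch only; the "> 20 sigma" figure comes from the text
# above, while the normality assumption is ours.
import math

def one_sided_tail(k: float) -> float:
    """P(Z > k) for a standard normal Z, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

for k in (2, 5, 20):
    print(f"{k:>2} sigma: tail probability = {one_sided_tail(k):.3e}")
```

Under that assumption, even a 5-sigma effect has a tail probability of
only a few times 10^-7; a 20-sigma discrepancy corresponds to roughly
10^-89, which is the scale of claim at issue here.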

So, let us speak plain English here:  Requiring a "correction" of
Cohen's data, as the LNT proponents do, is but an attempt to require
Bernie to "fudge" his data so that they will agree with the LNT model
predictions!

This is not the way we do science; rather, it is tantamount to requiring
a complete reversal of the Scientific Method.  It simply means: Go,
and make the data fit the model!

So, let us get our science straight first!  The only way we can do
"Good Science" is to carefully follow its intent!  Therefore, we once
again will get on our "Scientific Method Soapbox" and attach a
short discussion of it to this comment.  We lifted it directly from the
latest version of our "Ecological Fallacy" paper.

Best regards

Joe and Fritz

>>>>>>>>>>>>>>>>>>>>>>>>>

2.1.  THE SCIENTIFIC METHOD

 The Scientific Method was first formulated by René Descartes in
1637 (Descartes, 1968), and lies at the root of the tremendous
progress made in science and technology since then.  It has been
adapted to remain current with the development of mathematics and
its application to science, but its essence has remained the same: We
use experimental observations to formulate a hypothesis, a theory, or
a model, and then subject these ideas to the test with new
experimental observations.  If the new data consistently confirm the
predictions, we declare a new paradigm and then look for ways to
test it until we discover something new yet again and have to modify
the model or its theory.  If the data consistently contradict the
prediction, we have to drop the paradigm or modify it.  This process
of changing the paradigm has been widely discussed in the literature
(see for instance Popper, 1968; Kuhn, 1970; Bunge, 1979).  The
results vary in their approaches, interpretations and conclusions, but
they all agree on one point: Experimental data are used to either
confirm or reject the predictions of models and theories.

It has been shown that, in order to make this method feasible, the
experimental tests have to be preceded by a modeling effort fulfilling
a number of preconditions (Bunge, 1979; Lett, 1990; Seiler and
Alvarez, 1994a).  These conditions are components of what is usually
called “Doing Good Science.”   Lett (1990) has put together a set of
five preconditions and one test condition, which summarize recent
approaches to this problem.  In short, they are:

1) Sufficiency of Observations: Sufficient data have to be available
and serve as the basis to formulate a new hypothesis or model.  A
lack of sufficient data is often overcome by making an additional
hypothesis, but that can lead to wrong conclusions if the model consists
of more assumptions than data.  A classical example of initially
insufficient data is the orbit calculation for the Near Earth Asteroid
1997XF11 made two years ago, based on a number of data points
closely spaced in time right after the discovery.  This led to the
prediction of a close encounter or even a possible collision with Earth
early in the 21st century (Scotti, 1998).  The discovery of the same
asteroid on photographic plates taken eight years earlier then led to a
combined data set which covered a much larger time interval.  It
improved the precision of the orbit calculation, and the probability of
a close encounter was found to be smaller but still not negligible.  It
finally disappeared for all practical purposes after a more careful
consideration of the uncertainties by three groups of researchers, all
coming to essentially the same result.

2) Replicability of Observations: The same and other scientists must
be able to reproduce the original, experimental data.  Irreproducible
results are often the cause of false models and predictions.  As an
example, the original “Cold Fusion” data could not be reproduced by
the authors or any other scientists (Taubes, 1993), and interest in the
effect died away.

3) Comprehensiveness of Data Evaluation: All data in the modeling
range must be compared to the model, not only those which agree
with it. Ignoring any set of experimental data is a highly restrictive
act that demands a high level of justification.  We shall see in the
following discussions whether dismissing the Cohen data set from use
in radon risk assessment can really be justified [in the paper we
show that it cannot].  Another example of ignoring this requirement
with severe consequences is the way Soviet scientist Trofim Lysenko
suppressed all data that could and would have contradicted his
Lamarckian theory of the inheritance of acquired characteristics in
grain crops.  His data selectivity led to catastrophic consequences for
Soviet agriculture and its grain harvests for many decades
(Friedlander, 1995).

4) Logical Consistency of Model Approach: The set of assumptions
and properties of a theory or model must be free of internal
inconsistencies.  An example of failing this requirement is provided
by the linear and nonlinear models used in the BEIR III and BEIR V
reports on radiation carcinogenesis in man (NRC, 1980, 1990).  There,
the use of event doses instead of total accumulated doses for the
survivors of the nuclear attacks on Japan leads to logical
inconsistencies.  Whereas
event doses can be used for the linear model, their use in any
nonlinear model is a mistake.  For those models, total accumulated
doses must be used (Seiler and Alvarez, 1994b).  This mistake
removes the logical foundation for all statements about nonlinear
models in both BEIR reports.

5)  Scientific Honesty: This is a difficult requirement.  We all know
that scientists tend to be faithful to their brainchildren, sometimes
beyond all reasonable doubt (Popper, 1968; Kuhn, 1970).  A scientist
should admit when his model has failed and should go on from there.  An
example is the famous written wager between Stephen Hawking and Kip
Thorne on whether the X-ray source Cygnus X-1 contains a black hole at
its center (Thorne, 1994).  When the experimental evidence became
overwhelming, Hawking conceded in good humor and in writing that
Cygnus X-1 does indeed contain a black hole.

Unfortunately, there is another aspect of scientific honesty that has to
be mentioned here.  It concerns the integrity of the data presented or
evaluated by a researcher.  Recently, we have all become aware of
yet another case of alleged scientific fraud by an investigator (Medical
Tribune News Service, 1999).  Similar to Lysenko, this investigator is
accused of having discarded 93% of the data in one case because
they did not agree with his hypothesis. In this context, honesty or the
lack of it is an integral part of a scientist’s reputation.

6) Verification of Model: After these five preconditions are met,
verification is the essence of the Scientific Method: success or failure
in predicting the outcome of an experiment.  If the five preconditions
are not met, however, the outcome of such an experiment means little,
if anything.  It must also be noted that all physical measurements are
subject to uncertainties, consisting of both random and systematic
errors. [ Footnote: The term uncertainty is used here as a general
expression, covering both random and systematic errors. This
definition runs counter to some recent usage in risk assessment, but it
has now been adopted as U.S. National (ANSI) and international
(ISO) standard terminology (Taylor and Kuyatt, 1993, 1994; Seiler
and Alvarez, 1998b)].  Success or failure of a model prediction can
thus only be established statistically within the experimental errors
for a given confidence level.  Therefore, one experiment alone often
leads only to an increase or decrease of confidence in the model.
What is usually required for a complete loss of confidence in the
model and an eventual change of paradigm is a set of concordant
results from repetitions of the same experiment and, even better,
failures in one or more other types
of experiments.
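
The statistical comparison described in point 6 can be sketched in a
few lines (a minimal illustration; the numbers, the Gaussian error
model, and the 95% criterion are our assumptions, not taken from the
paper):

```python
# Minimal sketch of judging one model prediction against one measurement
# within combined uncertainties, at a chosen confidence level.
import math

def z_score(predicted: float, measured: float,
            sigma_model: float, sigma_meas: float) -> float:
    """Discrepancy in units of the combined standard uncertainty
    (random and systematic contributions added in quadrature)."""
    combined = math.hypot(sigma_model, sigma_meas)
    return abs(predicted - measured) / combined

# Hypothetical numbers: model predicts 10.0 +/- 0.5; experiment finds 11.8 +/- 0.6.
z = z_score(10.0, 11.8, 0.5, 0.6)
consistent_95 = z < 1.96  # two-sided 95% criterion for a Gaussian
print(f"z = {z:.2f}, consistent at the 95% level: {consistent_95}")
```

A single such failure would, as noted above, usually lower confidence
in the model rather than overturn it; a change of paradigm needs
repeated, concordant failures.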

This time-proven framework is the basis on which we will judge the
data and the theories for the correlation between the radon
concentrations and lung cancer mortality in U.S. counties.

>>>>>>>>>>>>>>>>>>>>>>>>>

REFERENCES

Bunge, M. 1979.  Causality and Modern Science.  3rd ed. Dover,
New York, NY.

Descartes, R. 1968.  Discourse on Method and the Meditations.
Penguin Books, Harmondsworth, UK.

Friedlander, M.W. 1995.  At the Fringes of Science, Chapter 11,
Political Pseudoscience: The Lysenko Affair.  Westview Press,
Boulder, CO.

Kuhn, T.S. 1970.  The Structure of Scientific Revolutions.
University of Chicago Press, Chicago, IL.

Lett, J. 1990.  "A Field Guide to Critical Thinking." The Skeptical
Inquirer, Winter.

Medical Tribune News Service. 1999. "Researcher on Power Line
Effects Said to Have Faked Data." Medical Tribune, June 22, 1999.
See also at
http://www.medtrib.com/cgi-bin/medtrib/articles//record?record=1482 .

NRC (National Research Council). 1980.  Committee on the Biological
Effects of Ionizing Radiation (BEIR III), The Effects on Populations of
Exposure to Low Levels of Ionizing Radiation. National Academy
Press, Washington D.C.

NRC (National Research Council). 1990.  Health Effects of Exposure
to Low Levels of Ionizing Radiation, BEIR V.  National Academy
Press, Washington, D.C.  Academy Press website with URL:
http://www.nap.edu/readingroom/books/beir6/.

Popper, K.R. 1968.  The Logic of Scientific Discovery. Harper &
Row, New York, NY.

Scotti, J.V. 1998. "Fleeting Expectations: The Tale of an Asteroid."
Sky & Telescope, 96 (1), 30-34.

Seiler, F. A., and Alvarez, J. L. 1994a.  "The Scientific Method in
Risk Assessment." Technol. J. Franklin Inst. 331A, 53-58.

Seiler, F.A., and Alvarez, J.L. 1994b.  "The Definition of a Minimum
Significant Risk." Technol. J. Franklin Inst. 331A, 83-95.

Seiler, F.A., and Alvarez, J.L. 1998b.  "On the Use of the Term
Uncertainty." Hum. Ecol. Risk Assess. 4, 1041-1043.

Taubes, G. 1993. Bad Science: The Short Life and Very Hard
Times of Cold Fusion. Random House, New York, NY.

Taylor, B. N., and Kuyatt,C. E. 1993.  Guidelines for Evaluating
and Expressing the Uncertainty of NIST Measurement Results.
NIST Technical Note, TN 1297, Gaithersburg, MD.

Taylor, B. N., and Kuyatt, C. E. 1994.  Guidelines for Evaluating
and Expressing the Uncertainty of NIST Measurement Results.
Appendix D: Clarification and Additional Guidance.  NIST Technical
Note, TN 1297, Gaithersburg, MD.

Thorne, K.S.  1994. Black Holes and Time Warps: Einstein’s
Outrageous Legacy. Norton & Co., New York, NY.


******************************
Fritz A. Seiler, Ph.D.
Sigma Five Consulting
P.O. Box 1709
Los Lunas, NM 87031-1709, USA
Tel.:        505-866-5193
Fax:        505-866-5197
e-mail:    faseiler@nmia.com
******************************

******************************
Joseph L. Alvarez, Ph.D., CHP
Auxier & Associates
9821 Coghill Rd., Suite 1
Knoxville, TN 37932, USA
Tel.:        865-675-3669
Fax:        865-675-3677
e-mail:    jalvarez@auxier.com
******************************


************************************************************************
The RADSAFE Frequently Asked Questions list, archives and subscription
information can be accessed at http://www.ehs.uiuc.edu/~rad/radsafe.html