
Absence of Proof...



Another Comment by Fritz Seiler and Joe Alvarez on the Statement
"Absence of Proof does not Constitute Proof of Absence."

     By quoting that adage with regard to the EMF discussion, which has
just reappeared like a ritornello, Jim Dukelow  -  as several times
before  -  has managed to touch one of the sore spots in our
consciousness.  So here we go again!  (Did you do that on purpose, Jim?
The mailing list has been rather quiet and peaceful lately! :-))
     Let us start with the simple things: Except for the theorems of
mathematics, SCIENCE HAS NEVER PROVEN ANYTHING!  Our science consists of
a large set of more or less successful models.  They are used more or
less confidently because they can predict the outcome of experiments
within more or less reasonable statistical limits.  These limits are
given by the combined uncertainties of prediction and measurement.
Science, and statistics in particular, can thus show no more than
compatibility or incompatibility of a prediction with the experimental
data.  No more, no less.
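     To make this concrete, here is a minimal sketch in Python (ours,
purely for illustration; the function name and the numbers are
hypothetical) of what such a compatibility test looks like: prediction
and measurement are compared through their combined uncertainty, and
the outcome is a statement of compatibility at a chosen confidence
level, never a proof.

    import math

    def compatible(prediction, sigma_pred, measurement, sigma_meas,
                   z_crit=1.96):
        """Compare a prediction with a measurement through their
        combined uncertainty; z_crit = 1.96 corresponds to two-sided
        95% confidence."""
        combined_sigma = math.hypot(sigma_pred, sigma_meas)  # quadrature sum
        z = (measurement - prediction) / combined_sigma
        return z, abs(z) <= z_crit

    # Hypothetical numbers, for illustration only:
    z, ok = compatible(prediction=10.0, sigma_pred=0.5,
                       measurement=10.8, sigma_meas=0.6)
    print(f"z = {z:.2f}, compatible: {ok}")

Note that the result is a yes/no statement at a stated confidence
level; nothing in it ever amounts to a "proof" of presence or absence.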
     Markus Fierz, the teacher of Theoretical Physics and Natural
Philosophy for one of us (FAS), used to say: "No one can prove that the
sun will rise again tomorrow morning.  However, it would be  -  to put
it mildly  -  unwise not to be prepared for it."  The important point
here is the difference between proof and expectation.  There is an
exceedingly small but nonzero probability, let us call it 'epsilon',
that some violent astrophysical event will occur tonight which makes
tomorrow's sunrise, and thus also the 'proof of its occurrence',
impossible or pointless.  Aside from that, we enjoy an expectation of
(1 - epsilon), which is smaller than 1 but exceedingly close to it, that
we will see the sun rise again tomorrow morning.
     Basically, the statement "Absence of Proof is not Proof of Absence"
is a truism, and the problem is largely created by the unscientific
manner in which it is used.  It is usually meant to imply that the
effect is quite likely to be there but is just too small to be seen.  In
the worst case, it is often quoted in support of the linear model, and
implies that although we cannot see it, it is not only there but we also
know its behavior as the dose decreases.  Now, this statement is not a
scientific statement; it is actually an "anti-scientific" statement.
First, because it claims that science is capable of proof, which it is
definitely not; Second, because it claims that we know something
that we cannot possibly know; and Third, because trying to furnish
'Proof of Absence' runs counter to the basic intent of the Scientific
Method.  The Scientific Method looks for deviations from the null
hypothesis, and thus from the current paradigm, and decides with a given
level of confidence whether those deviations are real.  Only this kind of
evidence, confirmed by others, will subsequently lead to a change in
paradigm.
     In particular, the Scientific Method is based on something like
five prerequisites for the data available and the models based on them
(Seiler, F.A., and J.L. Alvarez, "The Scientific Method in Risk
Assessment," Technol. J. Franklin Inst., 331A, 53-58, 1994).  Then, and
only then, is a comparison between model prediction and experiment
possible in a meaningful manner.  A model can either be based on the
properties of the data available directly for the object of the model
(such as a model for the angular correlation between the betas and the
coincident gammas in beta decay), or it can be based on more general
data and consist of a hypothetical assumption based on experience (such
as the properties of the known quarks leading to the successful
postulation of the 'Top Quark').  In neither case is the hypothesis
generated by making some random, frivolous assumption without scientific
merit which is then put to the test.  That kind of attitude leads
down the path to the totally baseless debates on 'How many angels can
dance on the point of a pin?'  It is also why Newton made his famous
disparaging remark "hypotheses non fingo" ("I feign no hypotheses").
     Quite to the contrary, making a scientific model and thus a
hypothesis is a considerable, painstaking effort:  One, it must be based
on what we know and not on what we do not know; Two, if possible, it
needs to explain within the errors the data we already have; and Three,
it must be able to make a testable prediction.  As an example, our
everyday engineering experience shows that Newton's Classical Mechanics
works extremely well for energies small compared to the energy
equivalents of the rest masses involved.  Einstein's Special Theory of
Relativity is much more complex, but correctly reproduces all the
low-energy data of Classical Mechanics.  In addition, however, it makes
correct predictions for the case of high energies, where the kinetic
energies become comparable to the energy equivalents of the rest masses
involved.  Similarly, the General Theory of Relativity correctly
reproduces both Classical Mechanics and Special Relativistic Mechanics.
Note in this context that the often-heard statement that 'Einstein
proved Newton wrong' is quite misleading; Einstein's theory is just a
more general theory which confirms Newton's theory in the cases where
the latter is applicable.  We think it is important to realize that
science is a long sequence of ever better model approximations.  It is
no less than that, but it also cannot be more than that.
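     The low-energy limit can easily be checked numerically.  The
following little Python sketch (ours; a toy illustration, not a
derivation) compares the relativistic kinetic energy (gamma - 1)mc^2
with the classical (1/2)mv^2 and shows how the two agree at small v/c
and part company as v approaches c.

    import math

    C = 299_792_458.0  # speed of light in m/s

    def kinetic_energies(m, v):
        """Classical and relativistic kinetic energy of mass m at speed v."""
        gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
        return 0.5 * m * v ** 2, (gamma - 1.0) * m * C ** 2

    for beta in (0.001, 0.1, 0.5, 0.9):
        classical, relativistic = kinetic_energies(m=1.0, v=beta * C)
        print(f"v/c = {beta}:  classical/relativistic = "
              f"{classical / relativistic:.4f}")

    # The ratio is essentially 1 at v/c = 0.001 and drops to about 0.31
    # at v/c = 0.9: Newton is the low-energy limit of Einstein.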
     From a statistical point of view, when you do science, you have
absolutely no freedom in the selection of the null hypothesis.  In
science, there is no judgement call involved, such as those many
statisticians claim to make in their analyses.  Without exception, what
we think we know already, i.e., the present paradigm, is always what
defines the basis for the null hypothesis: It is either a new model
based on known experimental data or it is based on a new additional
assumption trying to explain the data.  In the experiment, we look for
deviations from the predicted values.  As long as the experimental data
for the deviations are repeatedly compatible with zero within the errors,
we leave the paradigm unchanged.  We only change it when nonzero data
force us to change it.  Science thus consists of a set of paradigms
which we use until something better comes along.  Unfortunately, many
people, and among them many scientists, do not understand this
conditional nature of our scientific models.
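     In code, the logic of this paragraph is almost trivial, which is
exactly the point.  The following sketch (again ours, with hypothetical
numbers) keeps the paradigm of zero excess unless the observed
deviation is incompatible with zero at the chosen confidence level:

    def reject_null(observed_excess, sigma, z_crit=1.645):
        """One-sided test: is the observed excess significantly above
        zero?  z_crit = 1.645 corresponds to ~95% one-sided confidence."""
        return observed_excess / sigma > z_crit

    # Hypothetical measurements of an excess effect (value, standard error):
    print(reject_null(1.2, 1.0))  # False -> keep the paradigm (zero excess)
    print(reject_null(3.5, 1.0))  # True  -> evidence for a nonzero excess

The null hypothesis is fixed by the paradigm; the only decision left to
the experimenter is the confidence level at which to test it.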
     Examples of such deviations from the old paradigm are the increase
in mass experienced by a body moving without acceleration at high
speeds in the model of Special Relativity, and the increase in the
relative excess cancer risk experienced by humans exposed to low-LET
radiation.  In both cases, the initial facts were clear: we knew that
the mass increase is near zero at near zero speed, and we also knew
that the excess cancer risk is near zero at an excess dose near zero.
This observation clearly and unambiguously told us what the null
hypothesis had to be: the excess must be assumed to be zero until it
can be shown to be nonzero at a given confidence level.  A more general
approach has recently shown that all cause-effect relationships have
such logical thresholds; below these thresholds we must assume that the
excess effect is still zero (Seiler, F.A., and J.L.
Alvarez, "Toward a New Risk Assessment Paradigm: Logical Thresholds in
Linear and Nonlinear Risk Models," Technol. J. Franklin Inst., 335A,
23-30,  1998).
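     For a rough numerical feel for what such a logical threshold means
(this is our simplified reading for illustration, not the algorithm of
the paper cited above), note that an excess smaller than about z_crit
times the standard error of the measurement cannot be distinguished
from zero, so the null hypothesis of zero excess must stand below that
level:

    def logical_threshold(sigma, z_crit=1.645):
        """Smallest excess that could be declared nonzero at ~95%
        one-sided confidence; below it, assume a zero excess effect."""
        return z_crit * sigma

    # Hypothetical: excess-risk data with a standard error of 0.02
    print(f"logical threshold = {logical_threshold(0.02):.3f}")  # 0.033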
     In addition, it is important to realize that, if we do not use the
paradigm as the null hypothesis, we imply that, based on a model
prediction, we can tell whether the data are right or wrong.  A
classical example of this kind of 'inverted science' is the discussion
that has been raging about Bernie Cohen's correlation data between
lung cancer mortality and radon exposures for the U.S. population
(Cohen, B.L., "Test of the Linear-No Threshold Theory of Radiation
Carcinogenesis for Inhaled Radon Decay Products," Health Phys. 68,
157-174, 1995).  In this case, the linear model proponents tell us
that Bernie Cohen's experimental data must be wrong because they do not
agree with their model!  Talk about putting the cart before the
horse!  What the Scientific Method does is to test models by performing
experiments and comparing the results with the model predictions.  The
decisive priority goes to the experimental data and not to the models!
(Seiler, F.A., and J.L. Alvarez, "Is the 'Ecological Fallacy' a
Fallacy?" submitted to Hum. Ecol. Risk Assessment; MS available from
the authors.)
     Having said all of this, let us now look again at the statement
"Absence of Proof is not Proof of Absence."  We already mentioned the
erroneous use of the concept of scientific "Proof".  Let us therefore
consider an equivalent statement that does not introduce the notion of
proof.  It could go like this: "Although we cannot demonstrate the
presence of the effect experimentally, it could be there anyway."  To
assess this sentiment, we must face a basic fact: As long as we cannot
measure a quantity above the ever-present noise, the underlying
relationship can assume just about any shape, as long as its effects
are compatible with the errors of the zero effect.
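     A toy calculation (entirely hypothetical numbers, ours) makes this
concrete: when the random errors are large, radically different
dose-response shapes  -  here a linear one and a J-shaped one  -  are
both statistically compatible with the very same data, so the data
select neither.

    def chi_square(data, errors, model):
        """Chi-square of a model against data with given standard errors."""
        return sum(((d - m) / e) ** 2 for d, m, e in zip(data, model, errors))

    doses  = [1, 2, 3, 4, 5]
    data   = [0.1, -0.2, 0.3, 0.2, 0.5]  # measured excess effect
    errors = [0.5] * 5                   # large random errors

    linear  = [0.1 * d for d in doses]                 # linear, no threshold
    j_shape = [0.05 * d * d - 0.2 * d for d in doses]  # dips below zero first

    for name, model in (("linear", linear), ("J-shape", j_shape)):
        print(f"{name}: chi-square = {chi_square(data, errors, model):.2f} "
              f"for {len(doses)} data points")
    # Both values are far below the 95% critical value (about 11 for 5
    # degrees of freedom), so neither shape can be rejected.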
     As an example, let us again look at Bernie Cohen's radon/lung cancer
data and all the other data available in the low exposure region.
Cohen's data, with their small random errors, clearly show a U- or
J-shape, and neither the other lung cancer/radon data with their huge
random errors nor the BEIR VI model prediction can possibly contradict
them (see Fig. 1 in: Seiler, F.A., and J.L. Alvarez, "Is the 'Ecological
Fallacy' a Fallacy?").  Here, we clearly see that at low exposures
another mechanism is taking over and causes the nonlinear U- or
J-shape.  This nonlinear property flatly contradicts the claim, which is
voiced much too often  -  although without a shred of real scientific
evidence  -  that linearity persists down to zero exposure.  In
addition, we should again remember that, according to the Scientific
Method, it is up to the linear model proponents to show that their model
is compatible with the data, not the other way around.
     In this context, we should also take into account that nobody has
successfully attacked the validity of Cohen's measurements; only their
applicability as a dose-effect relationship has been questioned.  Here,
we have yet to see the BEIR VI model, appropriately corrected for
confounding effects, lead to numerical values that lie anywhere near
the Cohen measurements.  All we hear and read about is a
lot of irrelevant qualitative speculation, and that is definitely not
what the Scientific Method is all about.
     So we come to the result that "if you cannot demonstrate the
existence of an effect, you know nothing about what that effect might do
as a function of dose; it can take any shape that is compatible with the
errors of the experimental data."  In view of the failure of the linear
model to predict the low-level Cohen data, we would be ill advised to
persist in postulating that we know that the effect depends linearly on
radiation dose in that region.  This kind of statement would not only be
unscientific but would actually be dishonest, because in the case of
radon and lung cancer we are now aware of something we did not know
before.
     For other relationships, one could at best call that kind of
statement a rank speculation without any scientific basis.  Finally we
need to be aware that exactly the same argument can be made in the case
of the EMF data.  But then, many people, and among them both scientists
and regulators, have considerable problems with admitting that they do
not know something which could be relevant.  When they then claim to
know it anyway, their statements are not based on science but are
-  at best  -  'misstatements' which are personally, politically, or
pecuniarily motivated.


*************************

Fritz A. Seiler, Ph.D.
Principal
Sigma Five Associates
P.O. Box 14006
Albuquerque, NM 87191-4006
Tel.     505-323-7848
Fax.    505-293-3911
e-mail: faseiler@nmia.com

**************************
_______________
Joe Alvarez, Ph.D., CHP
Auxier & Associates, Inc.
10317 Technology Dr., Suite 1
Knoxville, TN 37932
Phone (423)675-3669
FAX: (423)675-3677
Email: jalvarez@auxier.com

****************************
