
Re: Absence of Evidence II



Comment by Fritz Seiler and Joe Alvarez on the
Letters of Response to our recently voiced dictum
that "Absence of Evidence is Evidence of Absence"

Tony Cox and several others took issue with our statement
that, if a statistical test is negative, it implies something about
the lack of toxicity.  They all quoted, in more or less detail,
the standard statistical statement to the effect that the inability
to reject the null hypothesis does not imply that the null
hypothesis is correct.  Tony Cox went a bit further into
detail and quoted the definition of the term ‘hypothesis'
... and therein  -  as we shall show  -  lies the rub!

But first, let us make some corrections: we never used
the words "true" or "truth" in connection with a hypothesis,
nor did we talk about "confirmation of no toxicity".  Such
statements are completely against the spirit of statistics.
They are not only illogical but unscientific.
The Scientific Method does not help us seek the
"truth"; it merely helps us confirm or reject models and
assumptions in a scientifically appropriate manner.  And by
"scientifically appropriate manner" we mean that it
addresses all aspects of uncertainty, sensitivity,
completeness, and honest assessment.

Quite generally, absolute statements such as ‘true' and
‘false' are slippery notions even in philosophy and should
be avoided at almost any cost in science.  Science never
proves anything; it makes models and assumptions that
are approximations to the truth, whatever that is.  Many
of these approximations are so incredibly good that some
people tend to confuse them with the "truth".  For our part,
we just call them facts.  Along these lines of scientific inquiry,
we think that the quote from a toxicologist about
"demonstrating absolute safety", or other absolute statements
such as the demand for "zero risk", are, to put it politely,
ill-advised.

Let us now look at the definition of the term ‘hypothesis':
"Supposition made as basis for reasoning, without assumption
of its truth."  This is of course the philosophical definition, and
we all know that, together with absolutes such as ‘truth', it
leads to such irresolvable paradoxes as "All Cretans are liars;
I am a Cretan!"  Now, how does this definition work in
science?  Well, Tony, your venerable fellow countryman
Isaac Newton said it best: "Hypotheses non fingo!" or "I
feign no hypotheses!"  Of course he meant the hypotheses
of philosophy.  They are totally arbitrary, can encompass
just about anything, and are, in Newton's opinion, totally
useless in science.  After all, we have neither the time nor the
inclination for philosophical discussions of the type "How
many angels can dance on the point of a pin?"

Unfortunately, statistical theory used the term ‘hypothesis' in
developing its tests.  And here we are in full agreement as
far as abstract statistics is concerned.  Also, within this
framework, all the statements made by Tony Cox and Clark
Carrington are correct.  After all, it is what statistics teaches
us.  However, in this context the question arises: "Is this way
of stating the problems appropriate for the application of
statistics to scientific decisions?"  Well, it is exactly here, in
the application of the Scientific Method that problems occur.
If the term ‘hypothesis' is used at all, the method requires that
the hypothesis has to be justified or at least that it has to be
in agreement with the facts we know.  Note that these
additional conditions are completely contradictory to the
lexicon definition of hypothesis given above.  We are thus
saddled with two different entities called ‘hypothesis'!  We
need to distinguish between them, if by nothing else then
by the appropriate selection of the null and alternate
hypotheses.

In toxicology and from a scientific point of view, the main
question is: What do we know a priori about an agent?
About its toxicology, nothing; but about its excess risk in
any test at dose zero we know that it has to be zero (by
definition).  So, for zero dose the excess risk is zero, and
that is not a hypothesis but an incontrovertible fact.  If you
want to call that the ‘null hypothesis' or anything else
(response model at zero dose, for instance), that is quite
all right.  Now add a little agent exposure, and ask "what
is the null hypothesis now, and what is the alternate
hypothesis?"  The Scientific Method allows only an answer
which is based on facts.  In this case it would be: "The
toxicity in this particular test is either still zero or it is not."
Note that the two possibilities are mutually exclusive and
complementary.  Thus the probability of one OR the other
coming to pass is unity.  This demonstrates that the term
hypothesis in science is severely restricted because the
question is almost always: "Are model prediction and
measurement in agreement or not?"  Perhaps a different
term than ‘hypothesis' should be used, but we think it is far
too late for that.  All we can do is to make it clear by the
choice of the null and alternate hypotheses that we do
understand the situation and practice Good Science.

Consequently the statistical test in the framework of the
Scientific Method has a different interpretation.  If the
test statistic has a value V which is just significant at level
alpha and therefore leads to a confidence level 1 - alpha,
this means that the null hypothesis has a probability of alpha
for yielding the value V of the test statistic and the alternate
hypothesis has a probability 1 - alpha of yielding the value
V.  So, contrary to what Clark Carrington thinks, this
interpretation holds not only for alpha equal to 0.5 but
for any value of alpha, i.e., for any bias.  The reason for this
simple property is the restricted definition of the two
hypotheses.
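As a concrete, purely illustrative sketch of such a significance test, consider a hypothetical bioassay with a known background response rate.  A one-sided exact binomial test needs nothing beyond the Python standard library; the cohort size, background rate, and responder count below are invented for illustration, not taken from any real study:

```python
from math import comb

def binom_pvalue(k: int, n: int, p0: float) -> float:
    """One-sided exact binomial p-value: P(X >= k responders out of n)
    under the zero-excess-risk (background) response rate p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 8 responders among 50 exposed animals,
# assumed background response rate of 5%.
p = binom_pvalue(8, 50, 0.05)
print(f"p-value = {p:.4f}")  # compare against a preset significance level alpha
```

If the printed p-value falls below the chosen alpha, the test is declared significant at that level; otherwise the observation is consistent with the background rate.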

Now let us return to the question of whether an exposure to
a small dose of the agent leads to a nonzero toxic effect.  In
the immediate neighborhood above zero dose and way
below the threshold of detection, a toxic effect cannot be
measured.  Thus even without statistics it is clear that the
null hypothesis is still in force, and by this we mean that the
significance alpha of the null hypothesis is still close to 1.  As
the dose increases, alpha will decrease and at some even
higher dose, alpha will drop below a preset value alpha*,
somewhere between 0.5 and almost zero, and we can
declare evidence for toxicity at a confidence level 1 - alpha*.
Quite clearly, evidence for toxicity here means that the
complement is evidence against toxicity.  So in this
framework we declare again:

Absence of Evidence is Evidence of Absence!
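The dose dependence described above can be mocked up numerically.  In this hedged sketch (every number is hypothetical), the p-value computed under the zero-excess-risk model shrinks as the observed responder count rises, standing in for increasing dose, and eventually crosses a preset threshold alpha*:

```python
from math import comb

def pvalue_at_least(k: int, n: int, p0: float) -> float:
    """P(X >= k) under the background (zero-excess-risk) response rate p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

alpha_star = 0.05   # preset threshold alpha* (illustrative choice)
n, p0 = 100, 0.02   # hypothetical cohort size and background rate

for k in range(0, 10):  # responder counts standing in for increasing dose
    p = pvalue_at_least(k, n, p0)
    verdict = "evidence for toxicity" if p < alpha_star else "no evidence yet"
    print(f"{k:2d} responders: p = {p:.4f} -> {verdict}")
```

At low responder counts the p-value stays near 1 and no toxicity is declared; once it drops below alpha*, evidence for toxicity is declared at confidence level 1 - alpha*, mirroring the argument in the text.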


P.S.        We also wanted to write a short answer to Vickie Bier's
               comment.  But you said it all so well, Tony,  that we do
               not have to jump in at all.

P.P.S.    We can already hear the many "But, ... but  ...  ."  So let
              us take on some of those buts now:

                 Is it possible to make a mistake?  Yes
                 Is it possible to falsify results?  Yes
                 Could the investigator be incompetent? Yes
                 Could we have asked the wrong question? Yes
                 Could the test chosen be too weak?  Yes

Of course, we assume that all the usual checks and balances of the
Scientific Method are in place and will sooner or later correct any
errors of omission and commission.

But does the clause ‘sooner or later' not mean that, for the time being,
we should assume guilt until there is proof of innocence?  Definitely
not!  That would be unscientific because you assume something you cannot
possibly know.  Also, this is a typical philosopher's hypothesis and
those  -  as we said before  -   are strictly for the birds.   :-)



************************************************************************
The RADSAFE Frequently Asked Questions list, archives and subscription
information can be accessed at http://www.ehs.uiuc.edu/~rad/radsafe.html