
Re: RISKANAL digest 1353 (Vicki Bier)



Answer to Vicki Bier's RISKANAL Digest 1353
by Fritz A. Seiler and Joseph L. Alvarez

Dear Vicki,

 We knew when we wrote it that our position on the null hypothesis
would get your attention.  We have been over this ground before,
but, as you said, "Here we go again!"  -  well, so are we.  We think
that all three of us agree that this topic is a rather important one, so
we will try to have another go at it.  Maybe the telegraphic style of our
explanation in our latest post gives the reader too much leeway in
interpretation.  So here we go again, step by step, explaining why we
think that there is no freedom in selecting the null hypothesis in
science and its applications.
     Let us state from the very beginning what we aim to show: In
each case we discuss here, there is no doubt about what the null
hypothesis has to be.  In every case, it is quite clear what the basic
scientific question of an experiment or a comparison is.  This question
is given by our attitude with regard to a theory or model: Do we still
believe in the old paradigm because there is insufficient data to
abandon it just yet?  Or do we now favor a new paradigm and
are trying to test it?  The selection of the null hypothesis is thus driven
by the situation in which a science finds itself, and that situation in
turn shapes the experimental questions asked.  Now let us go to the
various examples.
     When Newton published his principles, he accomplished several
things: With his three basic principles and the notion of 'action at a
distance' in his law of gravitation, he explained all the known effects
such as Kepler's laws, the facts of mechanics, and so on.  This was
thus a highly successful theory, which was soon elevated to the status
of a paradigm.  We need to point out here that these changes are
never quick one-step processes but rather long, drawn-out
iterative processes for which we have ample history in the form of
books, letters, papers, etc.  We are thus aware that we are hugely
oversimplifying the process here, but we have tried to focus on the
essential aspects.  If you want to know more, there is quite a lot of
fascinating reading on the changes of particular scientific paradigms
out there!
     As these laws implied a lot of testable hypotheses, Newton and
many others then put his predictions to the test.  One by one, they
were checked and found to be correct within the errors.  The process
was simply a comparison of his prediction (the new paradigm) with
the experiment, and he was right every time.  The success of his
Classical Mechanics paradigm left no doubt about what the question
was going to be and thus what the null hypothesis had to be.
     The most conspicuous of Newton's successes occurred later in
Celestial Mechanics, where a previously unknown planet was found
by rigorously applying his principles and his law of gravitation to the
motion of the newly discovered planet Uranus.  The initial failure of
this effort led to the postulation of yet another planet.  Celestial
Mechanics was then used to calculate the position of that
unknown object.  That position was the prediction, and looking
through the telescope was the experiment which discovered the
planet Neptune.  No problem with the null hypothesis here either.
Once a new paradigm seems to be correct, we look not so much for
the effect itself as for deviations from its predicted value.
     Despite what a lot of books say, the late 19th century
problems with the propagation of light (the Michelson-Morley
experiment, etc.) were not known to Einstein when he postulated
the Special Theory of Relativity in 1905.  He had different and quite
independent reasons for his theory.  But his theory predicted and
then identified correctly that the FitzGerald Contraction, the Mass
Increase, and the Time Dilation were manifestations of the same
basic Special Relativity postulate, which leads to the Lorentz
transformation, and which implies that c = const.
     Now at low speeds, the dynamics according to Newton should
be correct within the errors, and indeed, electrons of a few to a few
hundred eV do behave like classical particles.  However, when you
do precision measurements with 10 keV electrons or so, you will
begin to see deviations from the Classical Mechanics paradigm.  For
the time being, you still stick with the paradigm and continue looking
for more deviations at higher energies.  What you find is that the null
hypothesis becomes more and more untenable as the energy increases.
This was one part of the example in our 'Absence of Proof' post.
     When you have 100 keV electrons or so, your question may
change to: Does the Special Theory of Relativity for unaccelerated
motion still give correct predictions at energies comparable to
the rest energy of the electron (511 keV) and higher?  (Here we have
a typical extrapolation, Vicki.)  In other words, by now the paradigm
shift to Special Relativity has occurred, and it is found to hold within
the experimental errors, so now you start seeking deviations from it.
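     For readers who want numbers, here is a minimal sketch of that
electron example (ours, in Python; the energies are taken from the text,
everything else is our illustration, not part of the original argument):

    # Compare the classical and the relativistic speed of an electron
    # at the kinetic energies mentioned above.
    import math

    MC2 = 511.0e3  # electron rest energy in eV

    def beta_classical(T):
        """v/c from the Newtonian kinetic energy T = (1/2) m v^2."""
        return math.sqrt(2.0 * T / MC2)

    def beta_relativistic(T):
        """v/c from the relativistic kinetic energy T = (gamma - 1) m c^2."""
        gamma = 1.0 + T / MC2
        return math.sqrt(1.0 - 1.0 / gamma**2)

    for T in (100.0, 10.0e3, 100.0e3):  # kinetic energies in eV
        bc, br = beta_classical(T), beta_relativistic(T)
        print(f"T = {T:8.0f} eV   classical v/c = {bc:.4f}   "
              f"relativistic v/c = {br:.4f}   deviation = {100*(bc - br)/br:.2f}%")

The deviation is negligible at 100 eV, about 1.5% at 10 keV, and about
14% at 100 keV, which is exactly the pattern described above.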
        Even before this set of experiments was completely executed
and the results interpreted, Einstein introduced the General Theory
of Relativity, which works for accelerated systems as well and contains
the Special Theory of Relativity as a special case.  Because it contained
the largely validated Special Theory, some scientists thought
that Einstein could be right, and began to look not so much for
deviations from Classical Mechanics but directly for the effects of
General Relativity and any possible deviations from them.  In this
manner experimental physicists and astronomers found confirmation
for the perihelion advance of Mercury, the redshift in stronger
gravitational fields, and the bending of light paths near the sun.
     And until recently the General Theory has passed all the
modern tests which were thrown its way.  Now, however, there may
 -  and we stress 'may'  -  be some faint indications that the theory is
giving minutely inexact values in some cases (?!?).  This is yet
another example with a clear-cut null hypothesis.
     In summary then: In each case above there was never any doubt
about what the null hypothesis had to be, because it was quite clear
what the basic scientific question of the experiment was.  That question
was in each case clearly driven by the situation in which the
particular field of science found itself and by the questions which were
asked as a consequence.  In this country, we are not usually
taught to regard an experiment as a scientific question put to
nature, but that is exactly what it is.
     We know that many statisticians and also many risk assessors
claim that they can choose the null hypothesis according to some
criteria they set up for themselves (see Jim Dukelow's post of April
21st  -  to which we will respond a little later  -  for a simon-pure
interpretation of the abstract statistical view of that problem!   :-).
Also, these statisticians and risk assessors would understandably
hate to lose their freedom to select a null hypothesis at will
or whim.  However, we claim otherwise: In science, there is no
such freedom.
     In addition to the arguments given above about the scientific
question, there is another reason, and it is a very simple one: When
you require a 90% or 95% confidence level for 'statistical
significance', you are stacking the deck dramatically in favor of the
null hypothesis (one in ten and one in twenty, respectively, instead
of an even break of 50:50!).  This kind of extreme bias can only
be justified if you are on relatively safe ground and do not want to
leave it unless you have to: A theory that works well and which you
will not give up lightly, or something you think you know for sure.
These are exactly the criteria we talked about above.
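     To see how strongly the deck is stacked, here is a minimal
simulation sketch (ours, in Python; the sample size and effect size are
invented for illustration): even when a modest excess effect is really
there, a 95% requirement retains the null hypothesis most of the time.

    import random
    random.seed(1)

    def one_experiment(n=20, effect=0.3):
        """Draw n observations with a true mean of `effect`, then test
        h(0): mean = 0 with a crude z-test (known sigma = 1) at the
        95% confidence level.  Returns True if h(0) is rejected."""
        xs = [random.gauss(effect, 1.0) for _ in range(n)]
        z = (sum(xs) / n) / (1.0 / n**0.5)
        return abs(z) > 1.96

    trials = 10_000
    rejected = sum(one_experiment() for _ in range(trials))
    print(f"h(0) rejected in {100 * rejected / trials:.0f}% of experiments,"
          f" retained in {100 * (trials - rejected) / trials:.0f}%")

With these numbers, h(0) is retained in roughly three quarters of the
experiments even though the effect is real.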
     As another example, we know that the excess risk at an excess
dose of zero is by definition zero.  A lot of people seem to have some
trouble with this simple fact.  That may be because they do not give
sufficient thought to the meaning of the two qualifiers "excess" in that
sentence.  These qualifiers actually make the statement almost trivial.
You start at any dose D and declare the difference D' - D to any other
dose D' to be the excess dose.  Similarly, you take the risk r(D) at
dose D and declare the difference r(D') - r(D) to be the excess risk.
Then you know for sure that, due to that definition, the excess risk
must be zero at an excess dose of zero.  For any excess dose above zero,
the excess risk can be zero, positive, or negative.  This, by the way,
was part II of our example in our "Absence of Proof" post.
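     The definition can be written down in a few lines (a minimal sketch
in Python; the risk function r() below is a made-up placeholder, not a
real dose-response model):

    def excess_risk(r, D, excess_dose):
        """Excess risk relative to the reference dose D: by definition
        r(D + excess_dose) - r(D), identically zero at excess_dose = 0."""
        return r(D + excess_dose) - r(D)

    r = lambda dose: 0.01 + 0.002 * dose   # hypothetical risk function
    D = 3.0                                # hypothetical reference dose
    print(excess_risk(r, D, 0.0))          # 0.0 for any r() whatsoever
    print(excess_risk(r, D, 2.0))          # may be positive, zero, or negative

No matter what r() looks like, the excess risk at an excess dose of zero
is zero by construction; that is all the statement says.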
     Now, this is the situation with most, if not all, health effects as a
function of exposure.  It is often advantageous to set the dose D to
the background dose and to look for a nonzero excess risk.
Clearly, the null hypothesis is again given by what we know, namely
that at an excess dose of zero the excess risk is zero.  As we stated
before, there is no room for selection; the situation dictates what the
null hypothesis has to be.
     These are the first points that we have made in favor of a lack
of freedom in selecting a null hypothesis, and they were strictly based
on scientific reasons.  However, there is another point of view on the
same argument, and this one is more political and thus much more
powerful.  Let us ask ourselves:
1) How would we like to have to justify and defend the arbitrariness
of selecting a nonzero toxic effect as a null hypothesis, a choice with
a tremendous bias in favor of something we have yet to demonstrate?
2) How would we defend this decision against the accusation that we
are trying to unduly influence the statistical decision up front by an
'appropriate' selection of the highly favored null hypothesis h(0)?
3) How can we justify the usually tremendous cost of a decision based
on a null hypothesis h(0) selected in that manner?
     We have always felt that these aspects are something that the
classical rationales of most statisticians do not recognize at all (see
again Jim Dukelow's post of April 21st).  However, statisticians
quite properly try to make their theories independent of the facets
of a particular case.  Consequently, it is up to us users to make the
proper allowances and insert the caveats needed.  Once you think
about these aspects for a while, you may well agree with us when we
say that neither one of us would like to have to mount
such a defense.  Actually, we would like to be on the other side of
the argument, criticizing the decision made and its cost, all the while
casting aspersions on the motives which led to that particular choice.
   :-))       ;-)
     There is yet a third point that needs to be made.  Let us consider
what we do when we define the alternate hypothesis h(1).  What we
often do is to say that it is something like the opposite of h(0), i.e.,
its complement.  We interpret your stated position, Vicki, as running
somewhat along these lines: put the Seiler-Alvarez null
hypothesis into the position of h(1), and start with the null
hypothesis that the substance is toxic at an unspecified level (and
please excuse us if we are wrong!).  This is tantamount to proposing
a new paradigm, but there is nothing to back that paradigm up in
any detail.  So you are now faced with an unenviable choice: Either
you make up such a law out of whole cloth, and then we will both
yell "Hypotheses non fingo!" and charge to the attack, or you
leave it somewhat vague, and then we will both go on the offensive
again, armed with the arguments given above.     :-)
     We have always found that in scientific problems, the alternate
hypothesis h(1) is hard to justify unless you compare two different,
fully worked-out theories.  But then this is really a moot point,
because you can learn much more by testing each model separately
against the experimental data and then comparing the results.
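     In practice, that separate testing can be as plain as fitting each
candidate model to the data and comparing the quality of the fits (a
minimal sketch in Python; the doses, excess risks, and both one-parameter
models below are invented for illustration):

    doses = [0.0, 1.0, 2.0, 3.0, 4.0]
    excess_risks = [0.000, 0.011, 0.019, 0.032, 0.038]   # hypothetical data

    def fit_and_rss(basis):
        """One-parameter least squares for r ~ p * basis(d);
        returns the best p and the residual sum of squares."""
        num = sum(basis(d) * r for d, r in zip(doses, excess_risks))
        den = sum(basis(d) ** 2 for d in doses)
        p = num / den
        rss = sum((r - p * basis(d)) ** 2
                  for d, r in zip(doses, excess_risks))
        return p, rss

    for name, basis in (("linear", lambda d: d),
                        ("quadratic", lambda d: d * d)):
        p, rss = fit_and_rss(basis)
        print(f"{name:9s} p = {p:.4f}   RSS = {rss:.6f}")

Each model is confronted with the data on its own terms, and only then
are the two fits compared.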
     Now let us see if we are able to defend the choice we have
proposed.  We find that it is easily defensible by pointing out that
our null hypothesis is based on something solid, something we know
or at least think we know in a scientific sense: a known fact
or a paradigm.  We only give up something as solid as that in the
face of strong indications to the contrary.  Thus, for these reasons
we are able to justify the bias inherent in the null hypothesis, a bias
which is due to the high values of the confidence limits that were once
rather arbitrarily set by R.A. Fisher.
     As we pointed out in previous posts, lowering the confidence
limits is possible too, but it must again be justified carefully.  Of course,
the bias disappears at p = 0.5, and the result is an even chance
between h(0) and h(1).  This is usually not very helpful, and as the
bias can now go either way, we have to justify every step we take in
favor of one or the other hypothesis.
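     The even break at p = 0.5 is easy to check with the same crude
z-test as in the sketch above (ours; the critical values 1.96 and 0.674
are the standard two-sided normal quantiles):

    import random
    random.seed(2)

    def reject_rate(crit, trials=10_000, n=20):
        """Fraction of experiments rejecting h(0) when h(0) is true."""
        hits = 0
        for _ in range(trials):
            xs = [random.gauss(0.0, 1.0) for _ in range(n)]  # no real effect
            z = (sum(xs) / n) / (1.0 / n**0.5)
            hits += abs(z) > crit
        return hits / trials

    print("95% level (crit 1.96): ", reject_rate(1.96))    # about 0.05
    print("p = 0.5   (crit 0.674):", reject_rate(0.674))   # about 0.50

At the 95% level h(0) survives a true-null experiment about 19 times
out of 20; at p = 0.5 it survives only every other time.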
     We both realize that it will take more than a moment's thought to
understand this situation, because these aspects are taught neither in
Statistics 201 nor in Statistics 502, probably because such
problems belong more in the realm of science or in the area of the
pursuit of risk politics than in the sphere of pure statistics.  But then
it has always been a little bit risky to translate a finding of pure
science properly into an application for a specific case.

Best regards,

Joe and Fritz

*************************

Fritz A. Seiler, Ph.D.
Principal
Sigma Five Associates
P.O. Box 14006
Albuquerque, NM 87191-4006
Tel.    505-323-7848
Fax.    505-293-3911
e-mail: faseiler@nmia.com

**************************

Joe Alvarez, Ph.D., CHP
Auxier & Associates, Inc.
10317 Technology Dr., Suite 1
Knoxville, TN 37932
Phone (423)675-3669
FAX: (423)675-3677
Email: jalvarez@auxier.com

****************************
