Re: Absence of Proof ...
When I got around to reading the last half dozen or so RADSAFE Digests, I was a
little puzzled to see that Fritz Seiler and Joe Alvarez had posted to RADSAFE a
response to a message I sent to RISKANAL, a mailing list for risk professionals.
Oh, well, here is my response to their response to my ...
Note that the excerpts from the NAS report that I refer to were contained in the
message that Seiler and Alvarez were responding to.
==============
On Monday, April 19, 1999 9:18 AM
Fritz A. Seiler [faseiler@nmia.com] and Joe Alvarez wrote:
> Another Comment by Fritz Seiler and Joe Alvarez on the Statement
> "Absence of Proof does not Constitute Proof of Absence."
>
> By quoting that adage with regard to the EMF discussion, which has
> just reappeared like a ritornello, Jim Dukelow - as several times
> before - has managed to touch one of the sore spots in our
> consciousness. So here we go again! (Did you do that on purpose,
> Jim? The mailing list has been rather quiet and peaceful lately!?!?
>
> :-)) :-).
Actually, my reason for re-posting the excerpts from the NAS
evaluation of the EMF evidence was Robert Godfrey's posting, which
included:
> I was curious when I started to see more pieces since last summer
> about the "controversy" over electro-magnetic fields, especially
> cell phones (didn't the original cell phone scare start with a
> guest on the Larry King show in the early 90's)? I seem to recall
> several large NIH/NSF studies along with a meta-analysis of the
> literature, that appeared to show zip.
Like Seiler and Alvarez, I have some "sore spots" and the phrase "that
appeared to show zip" touched one of them. The idea that the NAS
meta-analysis and some other similar efforts have established
("proved?") that EMF at normal human exposure levels has no human
health effects is being assiduously peddled. I believe the excerpts
from the NAS report suggest that the situation is much more complex.
There are statistically significant associations between EMF exposure
proxies and cancer for which there are no plausible explanations in
what we know of the physics and biology involved.
<snip>
> Basically, the statement "Absence of Proof is not Proof of Absence" is
> a truism, and the problem is largely created by the unscientific
> manner in which it is used. It is usually meant to imply that the
> effect is quite likely to be there but is just too small to be seen.
>
> In the worst case, it is often quoted in support of the linear model,
> and implies that although we cannot see it, it is not only there but
> we also know its behavior as the dose decreases.
<snip>
Here Seiler and Alvarez have introduced one and a half "straw men",
against which they then argue vigorously.
The full Straw Man is "Absence of Proof is not Proof of Absence".
This is a phrase that was introduced by Seiler and Alvarez, one that I
have never used and would not use. It lives in the domain of
mathematics and logic, not science, and doesn't begin to capture the
complexity of what Goedel and Turing have shown we can and can't know
in those domains.
The phrase I have used, "Absence of Evidence is not Evidence of
Absence", is frequently cited in epidemiology as a caution that just
because we haven't been able to demonstrate an effect, that doesn't
necessarily mean it doesn't exist. If we haven't been able (through
epi studies or experiments or whatever) to demonstrate an effect,
there are two possibilities: 1) no effect exists, or 2) we haven't
looked enough or done the right experiment to find the effect that is
there.
In statistical terms, we conduct a test of hypothesis with null
hypothesis H0 = "no effect" and alternative hypothesis H1 = "some
specified effect" and the result of the test is that we cannot reject
H0. If we accept H0, there is a probability beta of a Type 2 error,
i.e., H1 is actually true, but we have accepted H0. beta may be quite
large. The value (1 - beta) is the statistical power of the test of
hypothesis. If beta is close to zero and (1 - beta) is close to one,
then our test has a high probability of finding the effect when it is
there. Typically, the only ways to increase the statistical power are
to base the test of hypothesis on probability distributions that are
precisely tailored to the problem or to increase the sample size.
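To put some flesh on that, here is a back-of-the-envelope power
calculation (mine, not anything from the NAS report; the rates and
sample sizes are made-up numbers chosen only to illustrate the
point). It is a rough normal-approximation sketch in Python of the
power (1 - beta) of a one-sided test of H0 = "event rate is p0"
against H1 = "event rate is p1":

# Rough power (1 - beta) of a one-sided z-test of H0: rate = p0
# against H1: rate = p1, at significance level alpha.  Illustrative
# numbers only.
import math
from scipy.stats import norm

def power_of_test(p0, p1, n, alpha=0.05):
    # Cutoff for "Reject H0": the observed rate that H0 itself
    # predicts will be exceeded only a fraction alpha of the time.
    cutoff = p0 + norm.ppf(1.0 - alpha) * math.sqrt(p0 * (1.0 - p0) / n)
    # Power: the probability, under H1, of landing above that cutoff.
    z = (cutoff - p1) / math.sqrt(p1 * (1.0 - p1) / n)
    return 1.0 - norm.cdf(z)

for n in (1000, 10000, 100000):
    print(n, round(power_of_test(0.010, 0.0125, n), 3))
    # power is roughly 0.22, 0.78, and essentially 1.0

The same "small" effect (a 25% increase over a 1% baseline rate) is
almost certain to be missed at n = 1,000 and almost certain to be
found at n = 100,000. So beta, and with it the meaning of "we could
not reject H0", depends entirely on the design of the study.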
The question of whether silicone breast implants cause human health
effect X is another issue on which it is frequently claimed that the
scientific evidence is in and proves that there is no human health
effect. Even people who should know better, such as Dr. Marcia Angell,
editor of the New England Journal of Medicine, make this claim
(Angell's arguments can be found in her book, Science on Trial, WW
Norton, 1996 and in a number of NEJM editorials she wrote before her
book). In fact, the evidence they cite is all epidemiological and
consists of epi studies that do not have sufficient statistical power
to detect a "small" effect (most of the papers note that weakness in
the study they describe). The one study that I am aware of which has
sufficient statistical power to detect a small effect (Hennekens et
al., JAMA, v. 275, 616-621, 1996), did in fact show an approximately
25% increase in connective tissue disease, with high statistical
significance, in a population of 400,000 female health professionals,
part of the population of the long-running "Nurses Study".
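A rough sample-size calculation shows why the smaller studies were
underpowered. The baseline rate below is an assumption of mine (I do
not have the actual connective tissue disease incidence at hand), so
treat the output as an order-of-magnitude sketch, not a reanalysis of
the Hennekens data:

# Approximate subjects per group needed for a one-sided test at
# alpha = 0.05 to detect a relative risk rr over an assumed baseline
# rate p0 with the given power.  Standard two-proportion
# normal-approximation formula; assumed numbers, for scale only.
import math
from scipy.stats import norm

def n_per_group(p0, rr, alpha=0.05, power=0.80):
    p1 = rr * p0
    z_a, z_b = norm.ppf(1.0 - alpha), norm.ppf(power)
    pbar = (p0 + p1) / 2.0
    num = (z_a * math.sqrt(2.0 * pbar * (1.0 - pbar))
           + z_b * math.sqrt(p0 * (1.0 - p0) + p1 * (1.0 - p1))) ** 2
    return math.ceil(num / (p1 - p0) ** 2)

# With a baseline rate assumed at 0.5%, detecting RR = 1.25 takes on
# the order of 45,000 subjects per group, far beyond most of the
# implant studies but comfortably within a cohort of 400,000.
print(n_per_group(0.005, 1.25))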
> From a statistical point of view, when you do science, you have
> absolutely no freedom in the selection of the null hypothesis. In
> science, there is no judgement call involved, such as those many
> statisticians claim to make in their analyses. Without exception,
> what we think we know already, i.e., the present paradigm, is always
> what defines the basis for the null hypothesis: It is either a new
> model based on known experimental data or it is based on a new
> additional assumption trying to explain the data.
<snip>
There is no basis in statistics for the assertion in the first
sentence above. In fact, when you use statistical tests of hypothesis
to do science (or anything else, for that matter), you have almost
complete freedom to select your null hypothesis, with the following
constraints. First, you want to choose as your Null the thing you
would like to disprove. Second, you need to choose a null hypothesis
that is capable of predicting the outcome of your experiment or the
expected properties of your data set. The predictions are used to
establish the "Accept H0" and "Reject H0" regions.
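The second constraint is worth a toy illustration (my example, with
nothing to do with EMF or radon): take H0 to be "the coin is fair"
and use H0's own predictions to mark off the rejection region, the
set of outcomes H0 says should occur less than 5% of the time:

# H0 = "the coin is fair" (p0 = 0.5), tossed n times.  The rejection
# region is defined entirely by H0's predicted distribution of heads.
from scipy.stats import binom

n, p0, alpha = 100, 0.5, 0.05
# Smallest count k such that P(heads >= k | H0) <= alpha.
k_reject = int(binom.ppf(1.0 - alpha, n, p0)) + 1
print("Reject H0 if heads >=", k_reject)   # 59 for n = 100
print("Tail probability:", 1.0 - binom.cdf(k_reject - 1, n, p0))

A null hypothesis that made no predictions could not be tested at
all; that is the real constraint, and it leaves the choice of which
paradigm to put in the role of the Null entirely open.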
If you are arguing for a new paradigm, then your null hypothesis is
the old paradigm. If you are defending the old paradigm, then your
Null is the proposed alternative. In either case, the only definitive
result you can get is rejection of the null hypothesis. Accepting the
Null is "confirmatory" and may give you a warm feeling about the Null.
If you have tested the Null 3,650,000 times and accepted it each time
(H0 = the sun will rise tomorrow), you are justified in having a
REALLY WARM feeling about it, but as Seiler and Alvarez note, you
haven't "proven" it. Statistical hypothesis testing and the Popperian
view of the scientific method are both nicely summarized by the
catchphrase "Nature whispers YES, but shouts NO".
> ... In view of the failure of the linear model to
> predict the low-level Cohen data, we would be ill advised to persist
> in postulating that we know that the effect depends linearly on
> radiation dose in that region. This kind of a statement would not
> only be unscientific but would actually be dishonest because in the
> case of radon and lung cancer we are now aware of something we did not
> know before.
>
> For other relationships, one could at best call that kind of statement
> a rank speculation without any scientific basis. Finally we need to
> be aware that exactly the same argument can be made in the case of the
> EMF data.
<snip>
The snippet above and a lot of other verbiage in the Seiler and
Alvarez posting about the linear no-threshold hypothesis (LNTH) are
the half Straw Man. They seem to imply, without explicitly saying so,
that I am one of those bullies picking on Bernie Cohen and his radon
and lung cancer data and the conclusions he draws from that data.
For the record, I believe that Cohen's data and analysis drive a stake
through the heart of the LNTH, at least for the case of lung cancer
induction by high linear energy transfer ionizing radiation (i.e., the
alpha particles emitted by radon progeny). It might be noted that
Cohen chose as his Null the thing he wanted to disprove, the LNTH.
The arguments against Cohen's analysis have mostly focused on whether
his ecological study suffers from the "ecological fallacy", but that
is a story for another day.
Best regards.
Jim Dukelow
Pacific Northwest National Laboratory
Richland, WA
jim.dukelow@pnl.gov
These comments are mine and have not been reviewed and/or approved by
my management or by the U.S. Department of Energy.