Re: Linear, no-threshold <hwade@aol.com>
My paper did not claim or even suggest that radon is protective
against lung cancer. It clearly states that an ecological study cannot
determine causation, and no claims were made about whether radon causes
or does not cause lung cancer. The purpose of my paper was to test the
linear no-threshold theory, and I have clearly shown that an ecological
study like mine can do that. Unless someone can point out something wrong
with my paper, one would have to conclude that the theory fails badly in
the exposure region below 4 pCi/L by grossly overestimating the cancer risk.
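To make that claim concrete, here is a minimal numerical sketch in Python
of how such an ecological test works. The numbers and the simple
least-squares fit are invented for illustration; they are not the data or
the analysis from my paper.

import numpy as np

# Hypothetical county data (made up for illustration).
r = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])        # mean radon level, pCi/L
m = np.array([62.0, 58.0, 55.0, 52.0, 48.0, 45.0])   # lung-cancer deaths per 100,000

# Under a linear no-threshold model, mortality should rise roughly
# linearly with mean radon:  m = a + b*r  with b > 0.  An ecological test
# fits b from county data and compares it with that expectation.
b, a = np.polyfit(r, m, 1)    # ordinary least-squares fit of m = a + b*r
print(f"fitted slope: {b:.1f} deaths per 100,000 per pCi/L")

# A fitted slope far from the positive slope the theory predicts is the
# kind of discrepancy an ecological comparison can expose, subject to the
# confounding questions debated in this thread.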
One thing I like about Strom's note is that he finally comes out
with a specific objection to my paper, giving me a chance to respond. He
raises the issue that the measurements were made after the cancer deaths
occurred. This, of course, is a real problem with the case-control
studies that Strom thinks so highly of. But it is much less of a problem
with my studies, which require only that the average radon levels
measured in a county during different time periods be strongly
correlated. Since radon levels are determined by geology and house
construction characteristics, this is certainly a highly plausible
assumption, especially when applied to a very large number of counties
from all over the US, which dilutes the influence of unusual local
circumstances. We have also done extensive studies of how radon levels
vary with the age of the house.
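For anyone who wants to see that assumption stated numerically, here is a
minimal sketch in Python of the correlation check it rests on. The county
means below are invented; they are not our survey data.

import numpy as np

# Hypothetical county mean radon levels from two measurement periods (pCi/L).
earlier = np.array([1.2, 0.8, 3.5, 2.1, 0.6, 4.0])
later   = np.array([1.1, 0.9, 3.2, 2.3, 0.7, 3.8])

rho = np.corrcoef(earlier, later)[0, 1]   # Pearson correlation between periods
print(f"correlation between periods: {rho:.2f}")

# A correlation near 1 supports using the mid-1980s measurements as a
# proxy for earlier exposure, since geology and housing stock change
# slowly.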
Bernard L. Cohen
Physics Dept.
University of Pittsburgh
Pittsburgh, PA 15260
Tel: (412)624-9245
Fax: (412)624-9163
e-mail: blc+@pitt.edu
On Thu, 11 Jan 1996 dj_strom@ccmail.pnl.gov wrote:
> Reply to Wade Patterson
>
> First, I don't read Lubin et al. (Health Phys. 69(4):494-500;
> 1995) the way you do. Lubin et al. essentially say that the
> inverse dose-rate effect is clear in the 11 uranium miner
> cohorts, and that it diminishes with decreasing total WLM
> exposure, supporting Brenner's suggestion (Health Phys.
> 67(1):76-79; 1994). Thus, extrapolation from risks inferred from
> high doses in miners to low lifetime doses (which can only be
> received at "low" dose rates found in homes) will probably not
> *underestimate* risks. Lubin et al. state, "assessment of risks
> of radon progeny exposure in homes... should not assume an
> ever-increasing risk per unit dose" (i.e., risk per WLM doesn't
> go *up* at low exposures). This doesn't refute LNT at all, it
> merely states that LNT won't underestimate risks as had been
> suggested on the basis of considering the inverse dose-rate
> effect alone.
>
> Secondly, Cohen's study (Health Phys. 68(2):157-174; 1995) is of
> a design that most risk assessors don't take very seriously: the
> ecologic design. For you to claim that anything is proved by a
> study of that design, even "corrected" till the cows come home,
> is a fairly strong statement. There are lots of authors who just
> aren't convinced.
>
> Cohen's design violates one of AB Hill's criteria (see below) for
> inference of causation from statistical association, that
> exposure must precede disease. Even the judge in the TMI
> lawsuits refused to hold TMI responsible for cancers that were
> diagnosed prior to the accident. Cohen's radon data were
> collected in the mid 1980s. His lung cancer rates are for the
> period 1950 to 1979. Given the minimum latent period of 10 or 20
> years for lung cancer in human beings, perhaps even longer, Cohen
> should have been measuring radon in the period 1910 to, say, 1969
> for comparison with these rates. If anything, Cohen's radon
> measurements should be compared to lung cancer mortality rates in
> the years 1995 (10 year latency) to 2020 (35 year latency). For
> most of the exposures in the uranium miner studies (and for no
> other radon studies that I know of), the measurements were made
> at the same time as the exposures. This is one of many factors
> that dilute the credibility of the study.
>
> Here are some of the major factors to consider before inferring
> that a statistical association is a causal one (adapted from
> Austin Bradford Hill, "The Environment and Disease: Association
> or Causation?" Proc. Roy. Soc. Med. 58:295-300, 1965):
>
> 1. Strength: a large effect, e.g., 32-fold lung CA increase in
> heavy smokers.
>
> 2. Consistency: is the effect consistently observed across
> studies?
>
> 3. Specificity: specific workers, particular sites and types
> of disease.
>
> 4. Temporality: exposure must precede disease.
>
> 5. Biological gradient: dose-response curve.
>
> 6. Plausibility: biological plausibility depends to some
> extent on how much biology one knows.
>
> 7. Coherence: cause and effect inference should not seriously
> conflict with generally known facts of the natural history and
> biology of the disease.
>
> 8. Experiment: does intervention reduce or prevent the disease?
>
> 9. Analogy: do other, similar agents produce the effects?
>
> BOTTOM LINE: STRONG STATISTICAL ASSOCIATION ALONE DOES NOT PROVE
> CAUSATION.
>
> My former colleague Dwight Underhill offers the following
> humorous example of causal inference: "In the winter I wear
> galoshes. In the winter I get colds. Therefore, galoshes cause
> colds." This, too, may be a 20-standard deviation effect, but it
> doesn't prove that galoshes cause colds. And it will take more
> than an ecological study to convince some of us that radon
> exposures protect against lung cancer.
>
> - Dan Strom <dj_strom@pnl.gov>