
Re: Linear, no-threshold <hwade@aol.com>



Reply to Wade Patterson

First, I don't read Lubin et al. (Health Phys. 69(4):494-500; 
1995) the way you do.  Lubin et al. essentially say that the 
inverse dose-rate effect is clear in the 11 uranium miner 
cohorts, and that it diminishes with decreasing total WLM 
exposure, supporting Brenner's suggestion (Health Phys. 
67(1):76-79; 1994).  Thus, extrapolation from risks inferred from 
high doses in miners to low lifetime doses (which can only be 
received at "low" dose rates found in homes) will probably not 
*underestimate* risks.  Lubin et al. state, "assessment of risks 
of radon progeny exposure in homes... should not assume an 
ever-increasing risk per unit dose" (i.e., risk per WLM doesn't 
go *up* at low exposures).  This doesn't refute LNT at all; it
merely states that LNT won't underestimate risks as had been 
suggested on the basis of considering the inverse dose-rate 
effect alone.

Second, Cohen's study (Health Phys. 68(2):157-174; 1995) is of
a design that most risk assessors don't take very seriously: the 
ecologic design.  For you to claim that anything is proved by a 
study of that design, even "corrected" till the cows come home, 
is a fairly strong statement.  There are lots of authors who just 
aren't convinced.  

Cohen's design violates one of AB Hill's criteria (see below) for 
inference of causation from statistical association, that 
exposure must precede disease.  Even the judge in the TMI 
lawsuits refused to hold TMI responsible for cancers that were 
diagnosed prior to the accident.  Cohen's radon data were 
collected in the mid 1980s.  His lung cancer rates are for the 
period 1950 to 1979.  Given the minimum latent period of 10 or 20 
years for lung cancer in human beings (perhaps even longer), Cohen 
should have been measuring radon in the period 1910 to, say, 1969 
for comparison with these rates.  If anything, Cohen's radon 
measurements should be compared to lung cancer mortality rates in 
the years 1995 (10-year latency) to 2020 (35-year latency).  For 
most of the exposures in uranium miner studies (and for no 
other radon studies that I know of), the measurements were made 
at the same time as the exposures.  This is one of many factors 
that dilute the credibility of the study.
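The latency arithmetic above can be sketched in a few lines.  (This is purely illustrative; the function names and the 1985 measurement year are my own assumptions, and the latency bounds are the 10- and 35-year figures from the text.)

```python
# Illustrative latency-window arithmetic for matching exposure
# measurements to cancer mortality rates.  Latency bounds are the
# 10- and 35-year figures discussed in the text.

MIN_LATENCY = 10   # years: minimum latent period for lung cancer
MAX_LATENCY = 35   # years: longer latency considered in the text

def rate_window_for_measurement(measurement_year):
    """Years in which lung cancer mortality could plausibly reflect
    radon exposure measured in `measurement_year`."""
    return (measurement_year + MIN_LATENCY, measurement_year + MAX_LATENCY)

def measurement_window_for_rates(rate_start, rate_end):
    """Years in which radon should have been measured to explain
    mortality rates observed over rate_start..rate_end."""
    return (rate_start - MAX_LATENCY, rate_end - MIN_LATENCY)

# Radon measured in the mid-1980s -> rates should come from 1995-2020:
print(rate_window_for_measurement(1985))          # (1995, 2020)
# Rates from 1950-1979 -> radon should have been measured ca. 1915-1969
# (close to the "1910 to, say, 1969" window in the text, which allows
# for a latency somewhat longer than 35 years):
print(measurement_window_for_rates(1950, 1979))   # (1915, 1969)
```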

Here are some of the major factors to consider before inferring 
that a statistical association is a causal one (adapted from 
Austin Bradford Hill, "The Environment and Disease:  Association 
or Causation?"  Proc. Roy. Soc. Med. 58:295-300, 1965):

1.   Strength:  a large effect, e.g., 32-fold lung CA increase in 
heavy smokers.

2.   Consistency:  is effect consistently observed across 
studies?

3.   Specificity:  specific workers, particular sites and types 
of disease.

4.   Temporality:  exposure must precede disease.

5.   Biological gradient:  dose-response curve.

6.   Plausibility:  biological plausibility depends to some 
extent on how much biology one knows.

7.   Coherence:  cause and effect inference should not seriously 
conflict with generally known facts of the natural history and 
biology of the disease.

8.   Experiment:  does intervention reduce or prevent?

9.   Analogy:  do other, similar agents produce the effects?

BOTTOM LINE:  STRONG STATISTICAL ASSOCIATION ALONE DOES NOT PROVE 
CAUSATION.

My former colleague Dwight Underhill offers the following 
humorous example of causal inference:  "In the winter I wear 
galoshes.  In the winter I get colds.  Therefore, galoshes cause 
colds."  This, too, may be a 20-standard deviation effect, but it 
doesn't prove that galoshes cause colds.  And it will take more 
than an ecological study to convince some of us that radon 
exposures protect against lung cancer.
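Underhill's point can be made quantitative with a toy simulation (all probabilities here are invented for illustration): a common cause, winter, drives both galoshes-wearing and colds, and a strong galoshes-colds association appears even though neither causes the other.

```python
# Toy simulation of the galoshes example: winter raises both the
# chance of wearing galoshes and the chance of catching a cold, so
# galoshes and colds end up strongly associated with no causal link.
# All probabilities are invented for illustration.
import random

random.seed(1)
N = 100_000
galoshes = colds = both = 0
for _ in range(N):
    winter = random.random() < 0.5           # half the observations in winter
    g = winter and random.random() < 0.9     # galoshes worn mostly in winter
    c = random.random() < (0.30 if winter else 0.05)  # colds mostly in winter
    galoshes += g
    colds += c
    both += g and c

p_cold_given_galoshes = both / galoshes
p_cold_given_none = (colds - both) / (N - galoshes)
print(f"P(cold | galoshes)    = {p_cold_given_galoshes:.3f}")  # roughly 0.30
print(f"P(cold | no galoshes) = {p_cold_given_none:.3f}")      # roughly 0.07
# The ratio is well above 1 even though galoshes do not cause colds;
# the association comes entirely from the shared cause (winter).
```

Ecologic radon studies are open to the same kind of distortion: a county-level association (or inverse association) can be manufactured by variables correlated with both radon levels and lung cancer rates.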

- Dan Strom <dj_strom@pnl.gov>