
It's the design, folks, it's the design



Suppose I make radon measurements in kingdoms around the Mediterranean 
in biblical times, and I correlate them with lung cancer mortality rates 
in modern China (on an alphabetical basis).  Also, I throw in 
correlations with cigarette sales during World War I.  I ignore diet, 
lifestyle, genetic differences, outdoor and indoor air pollution, 
background lung cancer rates, etc.  Suppose that I do (or do not) find 
some correlation.  Would you believe the study showed anything about 
radon, smoking, and lung cancer?  No.  Why?  Because none of those 
measurements (Rn, cigarettes) apply to the individuals who died of lung 
cancer in modern China.  This is one of the flaws in the Sternglass 
studies of fallout and SAT scores; this difficulty is why ecological 
studies are not compelling to me and others.

Cohen's data are a good deal better than the study described above, but 
the fundamental problem with any ecological study is the great leaps of 
faith that are necessary in positing that one set of numbers is related 
to another.  In Cohen's study, most of the cigarette sales data probably 
don't apply to most of the lung cancer death data because different 
people smoked the cigarettes sold in a county than died of lung cancer 
in a county.  Most of the radon measurements probably don't apply to 
most of the lung cancer death data because different people breathed the 
radon in the county than died of lung cancer in the county.  
Furthermore, the diet, lifestyle, genetic background, and other 
confounders are not controlled, and cannot be controlled, because of the 
lack of individual data.  
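The ecological fallacy described above is easy to demonstrate numerically. Below is a toy sketch (every number invented for illustration, not drawn from Cohen's data): at the individual level, risk rises with both radon and smoking, but because smoking prevalence happens to fall as county radon rises, the county-level correlation between radon and mortality comes out negative.

```python
# Toy illustration of the ecological fallacy (all numbers invented).
# Individually, radon and smoking each RAISE risk; yet the county-level
# correlation between mean radon and mortality is negative, because the
# uncontrolled confounder (smoking) varies across counties.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Mean county radon level (arbitrary units) and smoking prevalence;
# high-radon counties happen to smoke less.
radon = [1, 2, 3, 4, 5]
smoking = [0.5, 0.4, 0.3, 0.2, 0.1]

# Individual-level model: mortality rises with BOTH radon and smoking.
mortality = [0.01 + 0.005 * r + 0.2 * s for r, s in zip(radon, smoking)]

# County-level correlation of radon with mortality: -1.0, the opposite
# sign of the true individual-level radon effect.
print(pearson(radon, mortality))
```

Without individual data on the confounder, nothing in the county-level numbers reveals that the sign has flipped.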

This non-predictive relationship between various kinds of data isn't a 
"result" worth $5000, it isn't news to those who have taken epidemiology 
101, and it is unlikely to find its way into a peer-reviewed journal, 
since all of the reviewers would yawn and say, "No original content - 
reject." 

This lack of correlation between measurements is true of Wade 
Patterson's ecological studies, as well.  As an example of confounding, 
in the Jablon et al. 1991 nuclear power plant study (which, by the way, 
was done for political reasons), one of the few power plants for which 
the odds ratio was greater than one was the Beaver Valley Plant in 
Western Pennsylvania.  Beaver Valley is in the same town with a chemical 
plant that told EPA it had released something like 115,000 pounds of 
butadiene (a carcinogen for which there are compelling data) the year 
the study data were compiled, and the plant had been in operation for 
years.  But, of course, chemical exposures didn't show up in Jablon et 
al., because the authors didn't know about them.

At least in the case-control studies one knows that the individual who 
died (case) and the individual who didn't (control) lived, for some 
portion of their lives, in the houses in which the radon measurements 
were made.  And contrary to Cohen's claim in the March Health Physics, 
the miner study measurements were mostly made, with some exceptions, in 
the same mines in which each miner worked and while he was working 
there, not after the miners died.  For both case-control and miner 
studies, smoking 
data (when available) were for the individuals involved, not for unknown 
persons.

As for the recurrent question of hypothesis testing, I agree that the 
linear, nonthreshold model predicts that the excess relative risk of 
lung cancer death in groups of individuals is proportional to lifetime 
exposures to radon progeny in those individuals.  If one knew what the 
cradle-to-grave radon progeny exposures were for each individual in each 
group, along with each individual's smoking, diet, lifestyle, genetic 
predisposition, exposures to other air pollutants, and other risk 
factors for lung cancer, one would surely be able to test the 
hypothesis.  That's not what Cohen is doing.  When one measures Bob's 
radon, Joe's smoking, and Sam's lung cancer and tries to correlate them, 
one encounters inferential difficulties.  Some people who smoked 
cigarettes purchased in County X also smoked cigarettes purchased in 
Counties Y, Z, A, B, and C or in the Post Exchange or the College 
Bookstore or at the Casino or...  Some people who were exposed to radon 
in County D were also exposed to radon in Counties E, F, G, and H.  Some 
of those people died in County J.  Some people who died of lung cancer 
in County X didn't smoke any of the cigarettes sold in County X and were 
never exposed to radon in County X.  For my money, this isn't a very 
cogent way to test a hypothesis.
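The Bob/Joe/Sam problem can also be put in numbers. Here is a toy simulation (parameters invented, not Cohen's): the true model is excess risk = beta times lifetime exposure, but only a fraction of the exposure measurements actually belong to the person whose outcome is recorded. The fitted slope is then badly attenuated toward zero.

```python
# Toy sketch of measuring Bob's radon but Sam's lung cancer (all
# parameters invented).  True model: risk = beta * lifetime exposure.
# Only ~30% of the exposure measurements belong to the right person;
# the rest belong to someone else who merely lived in the same county.
import random

random.seed(1)
beta = 0.1
n = 10_000

true_exposure = [random.uniform(0, 10) for _ in range(n)]
risk = [beta * x for x in true_exposure]

# 30% of the time the measurement matches the person; otherwise it is
# an independent draw (a different resident's exposure).
measured = [x if random.random() < 0.3 else random.uniform(0, 10)
            for x in true_exposure]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

print(ols_slope(true_exposure, risk))  # recovers beta = 0.1
print(ols_slope(measured, risk))       # far below beta: attenuated
```

With individual data the slope is recovered exactly; with mismatched data the estimate shrinks toward zero, so a "test" of the linear, nonthreshold hypothesis built on such data has little power to confirm or refute it.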

I haven't said that ecological studies are `invalid,' as claimed by one 
respondent.  Ecological studies are useful for hypothesis generation.

In my opinion, it's the design, folks, it's the design.  

Reference

Jablon, S.; Boice, J.D.; Hrubec, Z.  Cancer in Populations Living Near 
Nuclear Facilities: A Survey of Mortality Nationwide and Incidence in 
Two States.  Journal of the American Medical Association 
265(11):1403-1408; 1991.

- Dan Strom

The opinions expressed above are my own, and have not been reviewed or 
approved by Battelle, the Pacific Northwest National Laboratory, or the 
U.S. Department of Energy.

Daniel J. Strom, Ph.D., CHP
Staff Scientist
Health Protection Department K3-56
Pacific Northwest National Laboratory
Battelle Boulevard, P.O. Box 999
Richland, WA 99352-0999 USA
(509) 375-2626
(509) 375-2019 fax
mailto:dj_strom@pnl.gov