Ecologic versus Case-Control Studies
Hi Bill
> Bill Field's (BF) responses to Jim Muckerheide's (JM) previous comments ---
>
> JM states -- A good case-control study, with small numbers, can have better
> >statistics than a "small" ecological study (in my ref to Cohen, his "small"
> >subset studies may be a million people of the 200 million total
> population, 20 million in a group in the whole study?).
>
> BF RESPONSE - I still do not follow Jim's thinking that "better statistics"
> guarantees a more powerful study.
I didn't say "it guarantees..."; in fact I said it is not related to the issue
of the power of the study, only that, absent adequate dose statistics (not
just "better" statistics), you start with a potentially fatal handicap in a
case-control study.
Of course, sufficiently large case-control studies are able, like an ecologic
study, to produce well-defined dose data on the basis of sufficiently large
samples, consistent with the fundamental properties of the
arithmetic/statistics. (Also, as you are doing, and as I said in the first
instance, there can be better data from specific residential conditions
that are not as likely to have wide variations, e.g., the conditions of
the women with lung cancer in Shenyang, China (Blot 1990); you are
attempting another solution to the fatal "you don't know the dose" problem.)
BUT this is NOT the condition of the current reported studies that are
claimed to be "better" than Cohen's study in establishing a relationship
between dose and consequences. Those generally fail.
>The POWER of the study is dependent upon BOTH the sample size and the
> Quality of the data.
More than that, of course; especially dose difference, and confounding factors.
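The point that data quality matters as much as sample size can be sketched with a toy simulation. Everything below is invented for illustration (it is not drawn from any radon study): classical measurement error in the exposure biases a fitted slope toward zero, which is the "you don't know the dose" handicap in miniature.

```python
# Hypothetical illustration: random error in the measured exposure
# attenuates an ordinary least-squares slope toward zero, no matter
# how large the sample is.
import random

random.seed(1)

def fitted_slope(n, true_slope, noise_sd):
    """OLS slope of y on a (possibly noisy) measurement of x."""
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]            # true dose
    ys = [true_slope * x + random.gauss(0.0, 0.5) for x in xs]  # outcome
    zs = [x + random.gauss(0.0, noise_sd) for x in xs]          # measured dose
    mz = sum(zs) / n
    my = sum(ys) / n
    cov = sum((z - mz) * (y - my) for z, y in zip(zs, ys))
    var = sum((z - mz) ** 2 for z in zs)
    return cov / var

exact = fitted_slope(20_000, 1.0, 0.0)  # dose known exactly
noisy = fitted_slope(20_000, 1.0, 1.0)  # dose measured with unit error
# In expectation the noisy slope is attenuated by 1/(1 + noise variance),
# i.e. toward 0.5 here, even with 20,000 subjects.
print(round(exact, 2), round(noisy, 2))
```

So a bigger sample narrows the error bars around the *wrong* (attenuated) slope; only better dose data fixes it.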
>I fail to see how
> you have better data in an ecologic study when all you have is summary data
> collected or downloaded from some spreadsheet found at the Census Bureau.
This isn't a scientific statement. Many studies have applied good statistics
to large datasets to reflect reality. Pejorative remarks like "all you
have..." are the same as saying "the laws of statistics don't apply". Why not?
> Performing advanced statistics on summary data (county radon
> concentrations, tobacco sales taxes) does not make a study more valid.
Who's "performing advanced statistics"? This is Statistics 101. The question
is simply what the large dataset tells you, with measures of error and their
physical meaning. You can't dismiss this as a matter of religion.
>The statistical manipulation may be advanced, but that does not mean the
> results are valid. In ecologic studies, you do not know the smoking
> history or the average radon concentration for even 1 of those 2 million
> individuals. You call this good data?
It is. But only if you use the math rather than unrelated characterizations.
It's fascinating that thousands of ecologic studies are performed and relied
on by the health sciences (and especially EPA) when they want to make a case,
even extremely poor ecologic studies like those for second-hand smoke or
fine particulates, and many ecologic studies have been confirmed in
physical reality; yet a really strong ecologic study based on a firm, proven,
mathematically and scientifically credible use of hard data is dismissed (only
because it has the wrong answer; otherwise there would at least be open
inquiry, but not against the vested interests).
> Nonetheless, I am glad you agree that a case control study has more a
> priori validity than a small ecologic study. I think I also take it that
> you think any study smaller than 200 million people is small.
No. Where do you get these "conclusions"? :-)
>Given that, small ecologic radon studies are all studies other than Dr.
> Cohen's. So what I think you are doing is comparing the validity of
> case-control studies versus only Dr. Cohen's Study.
No. First, see e.g. Sanquist et al. as a 'weak' but nonetheless informative
use of EPA radon data vs. lung cancer statistics just by EPA region. Of course
each small independent subset of Cohen's data can be a "small" study. The
French study (blank on ref - let me know if you don't have it) showed both
increases in particulates/health effects and decreases in radon/lung cancer.
And of course the biology is confirming the dose-responses, contrary to the
blatantly fallacious statement of BEIR that "we know" a single alpha can cause
cancer, when we've known the complete opposite since the original Oak Ridge
studies in the 1940s!
> JM states --- But, radon case-control studies can't know the dose. Radon
> studies measure the house. Two "equivalent" people (women) in the same
> house will _likely_ have a significant actual difference in lung dose; 2
> people in different houses with
> >the same measurement are likely to have an even larger variation in actual
> >lung dose.
>
> BF RESPONSE -- I think we have 3 items to consider when calculating an
> "exposure" variable.
>
> Some case control studies use only radon concentrations, not exposure. A
> few case-control studies look at occupancy rates and concentration to get
> radon exposure. Only one case control study I know of actually calculates
> dose to an individual and that is the one we are performing in Iowa. To
> calculate dose, you need to know the activity size distribution of the
> radon progeny in the home. We are presently looking at how well the radon
> concentration of the home approximates the dose to the individual.
>
> I again refer you to our methods paper for the Iowa Study (Journal of
> Exposure Analysis and Environmental Epidemiology 6:181-195, 1996); don't
> stereotype all case-control studies as being the same.
As above, and previously: fixing the "don't know the dose" problem is
essential. Your attempt to do that is laudable, and it confirms my argument
that the problem in current case-control studies is large, and likely fatal in
virtually all such studies.
> In ecologic studies, you have county mean radon concentrations. Not
> exposure. Not dose. There are huge variations in radon concentrations
> within counties. When you collect more and more information from counties,
> you don't get linear relationships between radon concentrations and dose to
> the county population.
This is where Statistics 101 comes in. There is real data in treating
adequately large populations with real math. Testing the introduction of
confounding factors addresses virtually all potential weaknesses. The actual
results can then be tested against multiple studies and against the magnitude
of a confounding factor that would be needed to invalidate the relationships.
(Of course there is a single "confounding factor" that is both large enough
and inverse to the radon concentration to explain Bernie's and other results -
that is the demonstrated stimulatory effect on the lung tissues of increasing
low levels of radon alphas to the lungs, in which the immune response
suppresses cancer formation as both DNA and cellular repair mechanisms are
enhanced :-)
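The "introduce the confounding factor and test" step can be sketched with synthetic data. All numbers and the tiny OLS helper below are invented for illustration (this is not Cohen's data or his method): fit the ecologic slope with and without a candidate confounder and see how far the exposure coefficient moves.

```python
# Illustrative confounder stress test on synthetic county-style data.
import random

random.seed(7)
n = 5_000
radon = [random.gauss(0.0, 1.0) for _ in range(n)]
smoking = [0.3 * r + random.gauss(0.0, 1.0) for r in radon]  # correlated
cancer = [0.5 * s - 0.2 * r + random.gauss(0.0, 1.0)          # synthetic truth
          for r, s in zip(radon, smoking)]

def ols(y, x1, x2=None):
    """Slope of y on x1; if x2 is given, adjust by partialling x2 out."""
    if x2 is not None:
        b = ols(x1, x2)
        x1 = [a - b * c for a, c in zip(x1, x2)]
        d = ols(y, x2)
        y = [a - d * c for a, c in zip(y, x2)]
    mx = sum(x1) / len(x1)
    my = sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x1, y))
    den = sum((a - mx) ** 2 for a in x1)
    return num / den

crude = ols(cancer, radon)              # confounded estimate (near -0.05)
adjusted = ols(cancer, radon, smoking)  # moves toward the true -0.2
print(round(crude, 2), round(adjusted, 2))
```

If a proposed confounder barely moves the coefficient when introduced, it cannot explain the result; if it moves it a lot, you can measure by how much.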
>Biases are introduced that can not even be measured; as the sample size gets
> larger these biases become magnified. The biases do not average out as you
> suggest, they get magnified.
Huh? Not in the data, applying statistics as we know it. Where does this come
from? And, more importantly, why is the result completely consistent in the
multiple studies that are sufficiently large to produce valid results?
> JM states --- As you think about it, the "dose" in the case-control study
> is now not a known quantity, but is a statistical mean or average, just as
> with the ecological study.
>
> BF RESPONSE -- In the case control study in Iowa, we calculate a dose for
> EACH individual in the study. We know their home radon concentrations, we
> know their historic home radon concentrations, we know their occupancy,
> etc. In ecologic studies, you do not have that information for even 1
> individual within the county. Dr. Cohen understands this. In a way, that
> is why he uses the collective dose concept and the LNT to derive individual
> dose from aggregate data. Unfortunately, others put a spin on Dr. Cohen's
> findings that go beyond, I think, what he is even comfortable with.
As Bernie said, he does not go beyond the "test of the linear model" because
that is the crux of the argument that absolutely disproves the LNT (for radon
and lung cancer). Along with that, even to introduce the "ecological fallacy"
is to accept that there is a threshold, and such a threshold must be above the
level at which Bernie's data falls statistically below the rate at the lowest
dose (<0.5 pCi/l?), a level not exceeded even at 8 pCi/l. The fact that he
chooses to leave further analysis of his data to others is just a personal
preference, not a statement that the data can't be applied beyond where he
wants to go. And, with the specific data and definitive errors, and the
confirmation of essentially equivalent results in multiple independent
studies, the relationship is unambiguous, though not sufficient to be
definitive about the fixed parameters of a final model. This exercise could be
done with more analysis of the data sets, and preferably with other
populations, though specific cancers vary geographically and racially (more
geographically).
> JM States --- In Bernie's many independent studies of various, independent
> >subsets, because the dose and cancer data statistics are so much more
> >powerful, the variations in results are small. Not only that, the value of
> >any confounding factor that could substantially affect the basic results
> >becomes very large.
>
> BF Response ---- The ecologic study is not more powerful because of the
> non-quantifiable biases within the data set; you have no idea what types of
> cross-level biases have been introduced.
You're right. I have "no idea what types of cross-level biases have been
introduced" - but then there don't seem to be any. What "types" do you know
about? And of what significance (real or potential) to the result? (And why
does the result appear in every independent data set and study?) Using the
words doesn't create the bias. As Bernie says: identify the specific potential
bias, and analyze it for significance. If none can be identified, it doesn't
exist, and the straightforward results of fundamental arithmetic stand. And
multiple studies with consistent results disprove the existence of such
potential unidentifiable biases (except those in the funding agencies' minds
:-).
> Again, what Dr. Cohen has is a large data set, this does not make a
> study more powerful.
Only if you ignore the data and the meaning of statistics; and the
confirmatory aspects of the many studies with consistent results.
> JM states ----- After considering many confounding factors, Bernie is quite
> >justified in looking at the other side of the issue and concluding, by the
> >direct arithmetic of Statistics 101, that a confounding factor that could
> >change the slope would have to be larger than the effect of smoking (and
> >not seen in any study to date), but even more significantly would also
> >have to be in a direct negative correlation with radon concentration
> >levels (and then wouldn't reach the BEIR slope unless it was MUCH larger
> >than smoking :-)
>
> BF RESPONSE - It is very difficult to control confounding on the ecologic
> or aggregate level. Sometimes things that may not even be (or appear to
> be) confounders on the individual level become confounders on the aggregate
> level. It also does not make a study more valid by increasing the number
> of adjustments you do. A study with 1000 attempted adjustments has no more
> validity than a study with 100 adjustments. If you actually knew what the
> confounder is, only 1 adjustment would be needed. In ecologic studies the
> confounders are generally non-linear and not apparent, which makes
> adjustments almost impossible.
Not relevant to the arithmetic of whether ANY confounder (or combination of
confounders), and of what magnitude, would be needed to significantly affect
the mathematical result.
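The arithmetic being appealed to here is the standard omitted-variable bias formula: observed slope = true slope + (confounder's effect on the outcome) x (confounder's regression slope on the exposure). A minimal sketch with invented numbers (none taken from any actual study) shows how large, and how negatively correlated with radon, a confounder must be to flip a positive slope:

```python
# Omitted-variable bias arithmetic (all numbers invented for illustration):
# observed = beta + gamma * delta, where beta is the true exposure slope,
# gamma the confounder's effect on the outcome, and delta the slope of the
# confounder on the exposure.
def observed_slope(beta, gamma, delta):
    return beta + gamma * delta

beta = 0.10    # hypothetical true positive dose-response slope
gamma = 0.50   # hypothetical confounder effect on the outcome

# To flip the observed sign, gamma * delta must be negative and exceed
# beta in magnitude, i.e. the confounder must run *against* the exposure:
delta_tip = -beta / gamma                       # tipping point: -0.2
assert observed_slope(beta, gamma, delta_tip) == 0.0
assert observed_slope(beta, gamma, -0.3) < 0    # beyond it, the sign flips
print(delta_tip)
```

The larger the true effect, the larger (and the more negatively correlated) any unidentified confounder must be to reverse it; that magnitude is itself testable against the data.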
> JM states (At the same time, there is biological evidence that the
> association is being confirmed by: animal biology data, and cellular and
> molecular biology, that
> >confirms the "stimulating" effect of moderate levels of alpha exposure to the
> >lung, (and the whole organism, with early studies in mice and rats that
> >uranium dusts and other alpha-contaminants resulted in long lives and
> improved physiological factors, see eg, Henry, JAMA, 1961 review article
> and summary of an Oak Ridge report.)
> >
>
> BF Response -- Jim, we may be in agreement on something today. Data is
> available on both sides of the table on this issue. I think the work of
> scientists like Dr. Raabe is extremely important and should be considered
> in the whole picture.
>
> I have no qualms with where you are headed, I just think there may be more
> valid ways to get there than depending on ecologic studies. I think the
> use of ecologic studies may be credible within a small segment of the
> Health Physics Society, but I think the following decreases outside that
> area. I think if you (and your special interest organization) want to make
> a more credible case for your theories, animal and cellular models may be
> the area to focus.
It's the opposite. Ecologic studies are more credible outside the HPS, since
many fields use them, and when they apply to areas of concern beyond the
narrow insider ballpark of rad health effects, people understand them well. It
reminds me of Eve Curie's 1939 book on her mother, which commented that the
average man on the street could understand that she had found radioactivity,
but the vested interests of the "science establishment" (especially chemistry)
would not accept it until it had been completely isolated.
> Two last thoughts on ecologic studies.
>
> 1) Case-control studies follow up the findings of ecologic studies. I have
> never seen it go the other way. There must be a reason for this.
> Case-control studies are analytical in nature, while ecologic studies are
> hypothesis-generating.
But in the case of selenium - found to be toxic at levels occurring in some
locations in nature - the ecologic finding was followed up in the lab. Where
is the substantial science to really examine this issue where it could be
proven? It has been suppressed: a few independent studies exist, while the rad
protection interests discourage funding, or cut funding to programs before
they reach strong statistically significant proof.
> 2) Dr. Cohen's Study is not the only large ecologic study that has found
> paradoxical findings. Absurd paradoxical findings have been found in other
> ecologic studies surveying numerous countries. For example, a huge
> ecologic study in Europe has found that high blood pressure protects you
> from having a stroke. Obviously this is a ludicrous finding. But, that is
> what they found using a large ecologic study. The European study used
> "powerful and advanced statistics", but still found a paradoxical, rather
> non-believable finding.
Don't know the study, but Cohen's study has been in continuous development and
response to all identified "weaknesses" and confounding effects for a decade.
> Dr. Cohen has been unable to explain how his ecologic study can produce
> false results. He asks others to explain his findings. Dr. Cohen has
> stated he can explain the false results of any other ecologic study (other
> than his own). He has frequently offered in the past to show how other
> ecologic studies can produce false results.
This isn't Cohen's position as I've interpreted it. He has applied every
factor to his results that he has been able to contemplate as an explanation
of how his study could produce false results, and has responded to all
"suggestions", including many ludicrous ones, to explain why his results could
be false. On other studies he has essentially said that he could apply the
same exhaustive methods to explain why they were false, if they were false. He
only points out that none of his critics, who have produced rationalizations
rather than rationale, has been able to do the same: to show how his study has
produced false results.
>I challenge Dr. Muckerheide or
> Dr. Cohen to explain the paradoxical findings published in the study by
> Menotti et al., European Journal of Epidemiology 13: 379-386, 1997.
??
> I apologize for the long monologue, but I think this is an important issue.
> In order not to clutter the listserv, I would be glad to continue the
> dialogue via email off the listserv. Several papers on this issue are
> coming up in the Health Physics Journal in the coming months. I urge all
> radsafers to read those papers with an open mind.
>
> I do not see this whole issue on the validity of ecologic study as a debate
> between Epidemiologist and Health Physicist. While I have "recent"
> training in Epidemiology, my background is in Health Physics. I have
> performed laboratory surveys; I have overseen bioassay programs; and I
> have performed environmental measurements. In fact, the majority of my
> published papers are non epidemiologic in nature. I think we have to move
> beyond this Epidemiology versus Health Physics and focus on the science.
> Let's try to look at the best methodologies to answer the questions at
> hand. As coordinator of the Iowa Radon Lung Cancer Study, my goal is to
> use the best dose assessment methods possible to examine the relationship
> (positive or negative) between residential radon exposure and lung cancer.
>
> Best regards, Bill Field
> bill-field@uiowa.edu
I applaud your attempt to solve the fatal "know the dose" problem in your
study. It would be good if others even acknowledged that it is a problem with
the existing radon case-control studies.
Regards, Jim Muckerheide (Mr.)
muckerheide@mediaone.net
Radiation, Science, and Health