
Re: Ecologic Studies



Dear Bernie,

        I am sorry to be so late in answering your questions.  I was quite busy,
and a decent answer to your questions requires both a lot of time and quite
some space.  So here goes:
        It seems that we are caught in a big misunderstanding. You are working
successfully at deriving an exposure-effect relationship for non-smokers; we,
on the other hand, were just coming out of investigations involving the basic
question: "What do we really need to know in order to be able to make a
risk projection for a prediction population?" and so we pursued that line of
questioning a bit further.
        We should take a moment here to think about the different implications
involved in the two approaches.  Seeking an exposure-effect relationship is
making a beginning 'in medias res' (that is, starting in the middle of the problem),
the decision to use an explicit cause-effect relationship having already been
made. We started more near the beginning, asking: "What data do we really
need, and what do we do with them when we get them?"  That means that if
you find that you don't really need an explicit exposure-effect relationship,
then you just won't use it.
        This is our approach, because we find that the implied causation has
been proven already, as far as that is possible in this case.  Basically, what
to do is a question of error propagation: If you go to some model or other
by a maximum likelihood fit, you make random and systematic errors, and
you then propagate them together with those of the AEP (Assumption of
Equivalent Populations) into the total uncertainty of the risk prediction.  But
if you go directly from the data to the risk prediction, you will have the same
systematic error contribution from making the AEP, but much smaller
random errors and smaller systematic contributions from the direct transfer
of information to the total error of the risk prediction.  So in this respect the
choice of the direct data transfer is quite obvious.
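        Just to make that error bookkeeping concrete, here is a small numerical
sketch in Python.  All the uncertainty values are invented for the illustration
and are not taken from any real study; the point is only that the AEP term is
common to both routes, so the direct transfer must come out with the smaller
total uncertainty:

import math

# Illustrative relative uncertainties (fractions of the predicted risk);
# the numbers are made up for this example, not taken from any study.
sigma_aep      = 0.20   # systematic error from the Assumption of Equivalent Populations
sigma_fit_rand = 0.15   # random error introduced by the maximum likelihood model fit
sigma_fit_sys  = 0.10   # systematic error from the choice of model
sigma_direct   = 0.05   # smaller error from transferring the data directly

def quadrature(*components):
    # Combine independent error contributions in quadrature.
    return math.sqrt(sum(c * c for c in components))

via_model_fit   = quadrature(sigma_aep, sigma_fit_rand, sigma_fit_sys)
via_direct_data = quadrature(sigma_aep, sigma_direct)

print("total uncertainty via model fit:       %.3f" % via_model_fit)
print("total uncertainty via direct transfer: %.3f" % via_direct_data)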
        Using this method, Joe and I obviously had no comments to make in
our paper about your r-S correlation.  We have both read your statements
on this issue and agree with your treatment of the data.  We also agree that
an intuitive judgment about the r-S correlation cannot be made, and believe
that the confounding factors are not large but as small as you claim.  In this
respect we have no problem at all.
        Actually, what we are saying is a consequence of an old discussion we
once had with regard to micro- and other forms of dosimetry: Assume that
you have a set of risk data points (plotted in the vertical) as a function of
different exposures.  You now use these different quantities for the abscissa,
such as exposures (ct-products and other measures) and different
kinds of doses, and then plot the data as cause-effect functions.  You get a
series of plots with the same risk points moving horizontally for different
types of doses.  Now making a number of interpolation fits for the different
plots will yield different functions for plots with different exposure quantities.
        If we now look at the projected risks for a particular set of exposures,
they will yield approximately the same risk values, as long as each exposure
or dose quantity is used together with its corresponding interpolation function.  Any
remaining risk differences are caused only by the different interpolation functions used.  The
predicted risks are, therefore, independent of the dosimetry used, at least in
first order.
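        If it helps, the same point can be made with a toy calculation, again in
Python and with an invented proportionality between the two exposure measures.
The same risk points are fitted once against a ct-product and once against a
dose; projecting a risk for one and the same physical exposure then gives the
same answer from either fit:

import numpy as np

# Invented example data: the same risk points plotted against two abscissas,
# a ct-product and a dose that here is simply proportional to it (dose = 2.5 * ct).
ct   = np.array([1.0, 2.0, 3.0, 4.0, 5.0])            # exposure (ct-product, arbitrary units)
dose = 2.5 * ct                                        # alternative abscissa (arbitrary dose units)
risk = np.array([0.011, 0.019, 0.032, 0.041, 0.048])  # the same risk points in both plots

# Two different interpolation functions, one per abscissa
fit_ct   = np.polyfit(ct,   risk, 1)
fit_dose = np.polyfit(dose, risk, 1)

# Project the risk for one and the same physical exposure, ct = 3.5
exposure       = 3.5
risk_from_ct   = np.polyval(fit_ct,   exposure)
risk_from_dose = np.polyval(fit_dose, 2.5 * exposure)

print(risk_from_ct, risk_from_dose)   # identical to within rounding

As long as each interpolation function is used with its own abscissa, the
projection does not care which dosimetry was chosen; differences enter only in
higher order, when the two exposure quantities are not simply proportional to
each other.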
        In our paper, Joe and I make the same kind of evaluation, using a simple
formalism appropriate for the job. I will not repeat that here.  We then show
that if you make a correction for smoking and its correlation, you may indeed
get a better dose-effect relationship for non-smokers; but, if you try to use this
result for a risk prediction involving another population that also contains some
smokers, you have to put an appropriate correction right back into the risk
calculation. For the same fraction of smokers, the influence of the corrections
will tend to cancel, and if you can justify making the Assumption of Equivalent
Populations (AEP), they cancel exactly.
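        The cancellation itself is just arithmetic.  Here is a minimal sketch with
made-up numbers (f is the smoker fraction, assumed equal in the test and the
prediction population, which is exactly the AEP for this factor):

# Invented numbers, purely for illustration
f   = 0.30     # fraction of smokers, taken to be the same in both populations (AEP)
R_s = 0.050    # mean response of smokers in the test population
R_n = 0.010    # mean response of non-smokers in the test population

# Observed mean response of the test population
R_test = f * R_s + (1.0 - f) * R_n

# Route 1: correct the test-population mean for smoking to isolate the
# non-smoker response ...
R_nonsmoker = (R_test - f * R_s) / (1.0 - f)
# ... then put the smoking contribution right back in for the prediction population
R_pred_corrected = f * R_s + (1.0 - f) * R_nonsmoker

# Route 2: transfer the population mean directly under the AEP
R_pred_direct = R_test

print(R_pred_corrected, R_pred_direct)   # identical up to rounding

Taking the smoking term out and putting it back in with the same fraction f is
a null operation; that is all the exact cancellation under the AEP amounts to.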
        Making the AEP always involves incurring a systematic error. The question,
as we have pointed out before, is whether we can live with that size of uncertainty.
Unfortunately, this discussion is usually not conducted with sufficient care, nor is an
attempt made to estimate the size of the systematic error involved.
        This AEP is the basis for our confidence in our claims: Whatever influence
a certain fraction of the population (like smokers) exerts on the mean response
of the test population, it will cancel exactly as long as you can make the AEP for
the test and the prediction population.  That is why we claim that any evidence
for correlations, and their size, is immaterial for a risk assessment.  However, it
is clear that making this correction will be absolutely necessary if you decide that
you want to determine a dose-effect relationship for non-smokers.
        In recent postings, some people have made some rather optimistic claims
about being able to correct for particular effects instead of making the AEP. I
would like to point out that, as an example, the BEIR committees talk quite a lot
about some special effects in the risks of different cancers, but when all is said
and done, they go and quietly make use of the AEP for all cancers.  So I am
happy to wish all those who attempt such corrections instead of making the AEP a lot of luck!
        It seems to be hard for some people to accept a specific expression such
as the AEP for something we have always done, ever since we started making
risk assessments.  I really fail to understand why, all of a sudden, we now have to
question assumptions that were always made as a matter of course, just because
we now refer to them globally as the AEP.
        Let me be clear here: I do want to see these questions raised and have them
discussed in detail, and if at all possible adorned with a numerical estimate. What
I cannot accept is the negative use of these questions in order to 'verbally' show
that something cannot be the way it is, but without giving any numbers.  Such
arguments are not scientifically valid, and merely take up time and space.
        As a good example to the contrary, the exposures of uranium miners and of
the general population to radon and its daughters were compared
carefully and in great detail by the BEIR IV and VI committees.  Unfortunately,
we also have to note that in the end, and without further comment, the committees
did use the AEP for their final results.
        Well, in summary, I think that we do not have a disagreement at all, Bernie;
we are just pursuing different goals in our work.

Best regards,

Fritz


Bernard L Cohen wrote:

>         It seems to me that you are glossing over real understandable
> problems with a formalism that does little for me to contribute to
> understanding. Why not face each problem head on and avoid the extra
> formalism?
>         It is not obvious to me that I can make judgements intuitively on
> issues like the one I raised. Even if you and I could agree on these
> judgements, that doesn't mean that everyone else will. And I don't even
> trust my own intuition. It was not obvious to me that a strong negative
> correlation between smoking prevalence, S, and radon levels, r, could not
> explain our data. I actually put a very substantial effort into proving
> this, including developing a mathematical technique for calculating the
> relationship between lung cancer rates and r as a function of the r-S
> correlation, before I came to believe in my results.
>         Can you explain to me why you consider it to be self evident from
> the start that such a strong r-S  correlation could not explain our
> data? Your use of FPC doesn't help me here, so please try to give me a
> more direct explanation. In future discussions, please stick to the
> example of a possible strong r-S correlation until I understand your reasoning.

--

***************************

Fritz A. Seiler, Ph.D.
Sigma Five Consulting
P.O. Box 1709
Los Lunas, NM 87031, USA
Tel.    505-866-5193
Fax.    505-866-5197
e-mail: faseiler@nmia.com

***************************

