[ RadSafe ] Clarification : Validity / Usefulness . . .
Brennan, Mike (DOH)
Mike.Brennan at DOH.WA.GOV
Wed May 2 13:06:30 CDT 2007
If the MDA was low in comparison to the results (say 10), a would still
be a "non-detect" (its uncertainty interval, 110 +- 140, brackets zero),
and the others would be detects. For statistical analysis, such as
determining an average (which likely is of limited use, but that doesn't
mean it won't be asked for), I would use all four numbers.
If the MDA is comparable to the results (say 300), a and c would be
non-detects, b and d would be detects. Again, I would use them for
statistical analysis, but I would try to limit the numbers I used in
reporting the result of my analysis.
If the MDA was high (say 1,000), none of the results are detects, and I
would say so when reporting, rather than giving numbers (if it were up
to me). I would keep the numbers in the data set for doing statistical
analysis.
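The detect/non-detect split described in the three scenarios above can be sketched as follows; this is a minimal illustration that assumes "detect" simply means the reported value exceeds the MDA, using the four values from the quoted question below:

```python
# Classify results as detect/non-detect against an MDA, assuming
# "detect" simply means the reported value exceeds the MDA.
results = {"a": 110, "b": 480, "c": 270, "d": 630}  # values from the quoted question

def classify(results, mda):
    return {k: ("detect" if v > mda else "non-detect") for k, v in results.items()}

print(classify(results, 300))   # comparable MDA: a and c non-detects, b and d detects
print(classify(results, 1000))  # high MDA: nothing is detected
```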
Using numbers that you have little confidence in, in your statistical
analysis, was something I had problems with when I started in
environmental radiation, but I am persuaded that it is the best way to
do it. If you have a large data set, with some of the results detects
and some not, you need a way of doing things like averages (and more
complex operations). If you say, "We will only include the detects"
your analysis will be biased high. If you say, "We will use the MDA for
any non-detects", again your analysis will be biased high. If you say,
"We will use '<MDA' for any non-detects", you will find that your
spreadsheet is full of error messages. If you say, "We will use '0'
for any non-detects", your analysis will be biased low (assuming there
is some background level of the isotope of concern). In the end, using
the numbers generated by your lab equipment likely gives you the best
results, but it is important that you understand how you got there and
what the limitations are.
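The biases described above can be made concrete with a small sketch. The four values come from the quoted question below; the MDA of 300 is an illustrative assumption:

```python
# Sketch of the four averaging strategies discussed above.
results = [110, 480, 270, 630]  # reported values from the quoted question
mda = 300                       # illustrative MDA, an assumption

detects = [r for r in results if r > mda]

avg_all      = sum(results) / len(results)                                 # every number as reported
avg_detects  = sum(detects) / len(detects)                                 # detects only: biased high
avg_mda_sub  = sum(r if r > mda else mda for r in results) / len(results)  # MDA substituted: biased high
avg_zero_sub = sum(r if r > mda else 0 for r in results) / len(results)    # zero substituted: biased low

print(avg_all, avg_detects, avg_mda_sub, avg_zero_sub)
```

Here the detects-only average (555) and the MDA-substitution average (427.5) both exceed the as-reported average (372.5), while zero substitution (277.5) falls below it, which is exactly the high and low bias described above.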
As I said in my off-board email, for something like H-3, where there is
a limit set by some authority (20,000 pCi/L from the Safe Drinking Water
Act, for example), I would argue that using numbers based on that limit,
rather than on the MDA, is the best strategy. I would argue for 10% of
the limit, or a 1% increase between samples that are detects (2,000 and
200 pCi/L, respectively). If your equipment can't give you uncertainties
and MDAs that can easily support those numbers, there are several
vendors out there who would be happy to sell you equipment that can.
From: radsafe-bounces at radlab.nl [mailto:radsafe-bounces at radlab.nl] On
Behalf Of Pete_Bailey at fpl.com
Sent: Wednesday, May 02, 2007 4:23 AM
To: radsafe at radlab.nl
Subject: [ RadSafe ] Clarification : Validity / Usefulness . . .
Thanks to all who have responded....
I'm not after guidance on sampling frequency or testing if two positive
measurements are different . . .
What I'm after (and many spoke around, but didn't directly answer):
in your world, i.e., by your procedures,
would you consider any one of the following different pairs of R +- r to
be 'not useable' for comparison to whatever (a limit, a prior sample,
etc.)?
a. 110 +- 140
b. 480 +- 140
c. 270 +- 140
d. 630 +- 140
If your procedure would have you consider a pair as 'not useable', what
is the basis of your procedure's decision process?
As far as the 'difference' of two positive results goes, I am very much
aware of, and familiar with, the "z score" / "zeta test"; it is standard
stats. BUT, "by your procedures," what do you do when one of the two is
"less than (a number)" and you don't have a standard deviation for the
LLD, and the other is R +- r (with R not being far from a typical LLD)?
Do you use the "z score test" and assume r to be applicable to the
LLD value?
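For the two-detect case, the standard test referred to above is z = |R1 - R2| / sqrt(r1^2 + r2^2). A minimal sketch, where the function name and the choice of inputs (b and d from the list above) are illustrative rather than from any cited procedure:

```python
import math

# Standard two-result difference ("z score") test:
#   z = |R1 - R2| / sqrt(r1^2 + r2^2)
def z_score(r1, u1, r2, u2):
    return abs(r1 - r2) / math.sqrt(u1 ** 2 + u2 ** 2)

# Two detects from the list above: b = 480 +- 140 and d = 630 +- 140.
z_bd = z_score(480, 140, 630, 140)  # about 0.76, so not significantly different

# When one result is reported only as "< LLD" with no stated uncertainty,
# the test is not directly defined; treating the LLD as the value and
# borrowing a typical counting uncertainty is one (assumed) workaround,
# which is exactly the judgment call the question asks about.
```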
Hopefully, the above clarifies what I was attempting to ask.