[ RadSafe ] Comparison of a Measured Result to the Critical/Decision Level
Peter Bossew
Peter.Bossew at reflex.at
Tue Oct 13 13:26:07 CDT 2009
Arvic, and all interested in esoteric statistics:
I think you have put it correctly.
Handling values < LLD (detection limit, y#) is a different, and quite
nontrivial, subject. The choice of setting them to y#/2 or the like is
theoretically a bit questionable, even if it seems to perform reasonably in
many cases, in particular if the values can be assumed to be approximately
lognormal (LN).
One possible way: if there are not too many missing values, compute the
median (which is quite robust) and assume LN (or any other distribution).
Then use the known (if known...) relation between E (expectation), the
variance, and med = F^-1(0.5) (the 0.5-quantile of the distribution
function F). For LN data:
E[Z] = exp(E[ln Z]) * exp(S^2[Z]/2),   V[Z] = E^2[Z] * (exp(S^2[Z]) - 1)   (1)
Z = variable; E = expectation; V = variance; S = sqrt(variance of ln Z) =
std. dev. of ln Z;
exp(E[ln Z]) = GM[Z] (geom. mean), exp(S[ln Z]) = GSD[Z] (geom. std. dev.)
estimate: E[ln Z] ≈ ln(med{z}),   S[Z] ≈ mad{ln z}   (2)
z = data (samples of Z); mad = median absolute deviation,
mad[Z] := med[ abs(Z - med[Z]) ]
Practically, when calculating med and mad in (2): for z < LLD set z = 0
(or -999 or the like) when computing the median; in the mad,
mad{ln z} = med{ abs ln(z/med{z}) }, and for z < LLD set abs ln(z/med{z})
to a very high value, like 999 or 1e+12. Approximately the same results
are obtained by setting z = LLD/2 for z < LLD and proceeding as above.
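As a concrete illustration, a minimal Python sketch of this procedure (my
own illustration, not from any standard; it assumes NumPy, fewer than half
of the values censored, and uses the raw mad without the usual ~1.4826
consistency factor, cf. the refinement mentioned further below):

    import numpy as np

    def ln_E_V_from_med_mad(z, lld):
        # Robust estimates of E[Z] and V[Z] for assumed-LN data with
        # non-detects (z < LLD), via equations (1) and (2) above.
        z = np.asarray(z, dtype=float)
        detected = z >= lld
        # median: non-detects enter as 0 -- they only need to lie below med
        med = np.median(np.where(detected, z, 0.0))
        # mad{ln z} = med{ abs ln(z/med) }; non-detects get a huge deviation
        abs_dev = np.where(detected,
                           np.abs(np.log(np.where(detected, z, med) / med)),
                           1e12)
        s = np.median(abs_dev)                # eq. (2): S[Z]   ~ mad{ln z}
        mu = np.log(med)                      # eq. (2): E[ln Z] ~ ln(med)
        E = np.exp(mu) * np.exp(s**2 / 2)     # eq. (1)
        V = E**2 * (np.exp(s**2) - 1)         # eq. (1)
        return E, V

With half or more of the values below the LLD, the median itself is
censored and the sketch breaks down; hence the condition of "not too many
missing values" above.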
The resulting SD cannot be used for interval estimates, of course.
Quantiles F^-1(p), such as the 0.95 quantile, must be estimated from the
fitted LN.
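For instance, with mu = ln(med{z}) and s = mad{ln z} from (2), the
p-quantile of the fitted LN follows from the standard normal quantile (a
sketch assuming SciPy is available):

    import numpy as np
    from scipy.stats import norm

    def ln_quantile(med, s, p=0.95):
        # F^-1(p) of the fitted LN: exp(mu + s * z_p)
        return np.exp(np.log(med) + s * norm.ppf(p))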
One condition of such methods is of course that the data are assumed to
belong to one population.
It seems that this method yields somewhat more realistic estimates of E and
Var than the distribution-fitting method, which tends to overestimate these
statistics. I think it is also not too sensitive to outliers. It can also
be implemented very easily.
One drawback (shared with the distribution-fitting method) is that, at
least in this simple implementation, it does not account for the actual
LLD: the information contained in it (by its definition via the alpha and
beta quantiles) is lost.
If interested, see the attached PDF-converted XLS for an example (LN
fitting with Statgraphics). The results obtained with the med/mad method
are highlighted yellow, the original data green (reasonably LN, according
to the KS, CvM W^2 and AD A^2 tests). The model (1) is quite sensitive to
deviations from LN, in particular the estimated V; I did not investigate
to what degree this is compensated by the robustness of med and mad. The
AM and SD estimated with an LLD/2 method (LLD set deliberately) are blue.
As a refinement, the estimate of S in (2) can be made more sophisticated
(because for Z ~ N, the abs(Z - med[Z]) are not ~N but follow a more
complicated distribution). I can send you the original XLS (it cannot be
attached here).
Just a suggestion (not ISO certified ! ;-) )
Peter
ps. I don't know how Excel calculates the median. Theoretically,
ln(med[Z]) should equal med[ln Z]. In xls they are slightly different,
probably due to some interpolation when dealing with the data {z} as
samples of Z.
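A small illustration with hypothetical numbers: for an even sample size
the median is interpolated between the two middle values, so the logarithm
and the median no longer commute:

    import numpy as np

    z = np.array([1.0, 2.0, 4.0, 8.0])
    print(np.log(np.median(z)))   # ln(3)           = 1.0986...
    print(np.median(np.log(z)))   # (ln 2 + ln 4)/2 = 1.0397...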
"Arvic Harms" <Arvic.Harms at npl.co.uk> writes:
>As far as I understand the ISO 11929:2008 draft, it recommends the
>following:
>
>If result < y* (decision threshold), report as 'not detected' or
>alternatively as 'less than y# (detection limit)', if required by a
>regulator.
>If result >= y*, report the best estimate of the result together with its
>uncertainty (even if the result is less than y#, the detection limit).
>
>Example:
>
>Background measurement time: 10000
>Background counts: 100
>Gross measurement time: 10000
>Gross counts: 120
>Net count rate: 0.0020
>Decision threshold y*: 0.0023
>Detection limit y#: 0.0049
>Detected: No
>Report as: Not detected (or less than 0.0049)
>
>Background measurement time: 10000
>Background counts: 100
>Gross measurement time: 10000
>Gross counts: 124
>Net count rate: 0.0024
>Decision threshold y*: 0.0023
>Detection limit y#: 0.0049
>Detected: Yes
>Best estimate: 0.0026
>Best estimate standard uncertainty: 0.0013
>Report as: 0.0026(13) at k=1
>
>Background measurement time: 10000
>Background counts: 100
>Gross measurement time: 10000
>Gross counts: 200
>Net count rate: 0.0100
>Decision threshold y*: 0.0023
>Detection limit y#: 0.0049
>Detected: Yes
>Best estimate: 0.0100
>Best estimate standard uncertainty: 0.0017
>Report as: 0.0100(17) at k=1
>
>Summary
>
>For net count rates
>
>0.0020: Report as not detected or < 0.0049
>0.0023: Report as 0.0026(13)
>0.0100: Report as 0.0100(17)
>
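The quoted numbers can be reproduced with a few lines of Python. This is a
sketch only, assuming k_alpha = k_beta = 1.645, a pure counting model
without calibration uncertainty, and the ISO 11929 Bayesian best estimate
for results above y*; the function names are illustrative:

    import math

    def characteristic_limits(n_gross, t_gross, n_bkg, t_bkg, k=1.645):
        # Decision threshold y* and detection limit y# for a net count
        # rate in the simple gross-minus-background counting model.
        r_net = n_gross / t_gross - n_bkg / t_bkg
        u_net = math.sqrt(n_gross / t_gross**2 + n_bkg / t_bkg**2)
        # uncertainty of the net rate if the true value is 0
        # (i.e. the gross rate equals the background rate):
        u0 = math.sqrt(n_bkg / (t_bkg * t_gross) + n_bkg / t_bkg**2)
        y_star = k * u0
        # Currie-style detection limit (L_D = k^2 + 2*L_C in counts):
        y_sharp = k**2 / t_gross + 2 * y_star
        return r_net, u_net, y_star, y_sharp

    def best_estimate(y, u):
        # ISO 11929 Bayesian best estimate for a non-negative measurand
        # (normal likelihood truncated at zero).
        t = y / u
        pdf = math.exp(-t**2 / 2) / math.sqrt(2 * math.pi)
        cdf = 0.5 * (1 + math.erf(t / math.sqrt(2)))
        z = y + u * pdf / cdf
        uz = math.sqrt(u**2 - (z - y) * z)
        return z, uz

    # second example above: 124 gross counts, 100 background counts
    r, u, y_star, y_sharp = characteristic_limits(124, 10000, 100, 10000)
    print(round(y_star, 4), round(y_sharp, 4))          # 0.0023 0.0049
    print([round(v, 4) for v in best_estimate(r, u)])   # [0.0026, 0.0013]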
>At first sight, this may seem somewhat counterintuitive, but I believe it
>is correct. When you want to combine results (as a sum or a mean), then in
>the case of a count rate of 0.0020, dividing 0.0049 by a factor of two
>gives a reasonable estimate close to the value of 0.0020 (although this
>value is 'not detected'). However, it will of course fail for count
>rates of 0.0001 or -0.0020.
>
>Scanning through the literature (including the US-EPA document "Guidance
>for Data Quality Assessment" (EPA QA/G-9)), it is clear that there are
>several ways of analysing data with non-detects (censored data).
>
>(i) Replace non-detects with DL/2, DL/SQRT(2), DL or (my favorite) "a
>very small number" (see the sketch after this list)
>(ii) Trimmed means
>(iii) Proportion tests
>
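Method (i) in code form, as referenced above (a trivial sketch; the factor
is a choice, not a recommendation):

    import numpy as np

    def substitute_nondetects(z, dl, factor=0.5):
        # replace values below the detection limit DL by factor * DL;
        # factor = 0.5 (DL/2), 1/sqrt(2), 1.0, or ~0 ("a very small number")
        z = np.asarray(z, dtype=float)
        return np.where(z >= dl, z, factor * dl)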
>I guess the main problem is how the detection limit (DL) is defined
>(Bayesian ISO 11929 y#, LLD, MDA, Currie Ld) and how you report it. The
>suggestion below to publish data in different ways within a report is a
>good one.
>
>Kind regards,
>
>Arvic Harms
>
>Dr Arvic Harms
>National Physical Laboratory
>Hampton Road
>Teddington TW11 0LW
>Middlesex
>United Kingdom
>E-mail: arvic.harms at npl.co.uk
>Tel ++44 20 8943 8512
>Fax ++44 20 8614 0488
>
>> -----Original Message-----
>> From: radsafe-bounces at radlab.nl [mailto:radsafe-bounces at radlab.nl] On
>> Behalf Of Brennan, Mike (DOH)
>> Sent: 08 October 2009 16:56
>> To: radsafe at radlab.nl
>> Subject: RE: [ RadSafe ] Comparison of a Measured Result to the
>> Critical/Decision Level; new question
>>
>>
>> How to report less than LLD (or MDA, or whatever) is
>> something worth discussing within your organization every
>> now and then, to make sure that new people coming in from
>> other places understand what you are doing.
>>
>> We report results in different ways, depending on who the
>> intended audience is. Sometimes we report in different ways
>> within the same document. For example, we have one report
>> that samples for Co60, Cs137, and I131, as well as any other
>> isotopes that produce positive results on a gamma scan (we
>> exclude isotopes in the U238 chain). In the main report,
>> aimed at the general public, we have tables with "Not
>> Detected" for any result where the counted activity was not
>> above the LLD. In the appendixes we have the actual result,
>> so anyone who is using the data for more involved statistical
>> operations will have something to work with.
>>
>> This may not come up too often if you are dealing with "real"
>> radioactivity, but it is pretty common in environmental monitoring.
>>
>> -----Original Message-----
>> From: radsafe-bounces at radlab.nl
>> [mailto:radsafe-bounces at radlab.nl] On Behalf Of blreider at aol.com
>> Sent: Wednesday, October 07, 2009 5:49 PM
>> To: Arvic.Harms at npl.co.uk; radsafe at radlab.nl
>> Cc: BobShannon at earthlink.net
>> Subject: Re: [ RadSafe ] Comparison of a Measured Result to
>> the Critical/Decision Level; new question
>>
>>
>>
>> Semantics is really messy, especially when dealing with
>> statistics. Ditto on Bob Shannon's references; you may also
>> want to look at papers published by Mark A. Tries of the
>> University of MA Lowell (sometimes et al.), who has authored
>> a number of good papers on counting statistics.
>>
>>
>>
>> If you use zero, you most likely are adding a bias to your
>> conclusions. This bias may be high or low. ISO 11929:2008
>> and the references below that Bob submitted are in agreement
>> that zero is not an appropriate approximation of the value if
>> less than the detection limit. A bias may create problems if
>> conclusions are incorrect as a result of the bias. Unbiased
>> data should be used for all calculations performed to provide
>> a best estimate for reporting, based on an acceptable
>> percentage of false + and false - results. Even if reporting
>> a best estimate, it is often useful to report, or at least
>> maintain a record of, the actual measurements and the errors
>> on the measurements.
>>
>>
>>
>> I have never seen value/2; perhaps the person who started
>> that was confusing the 95% MDA with the Lc (critical level)
>> and taking half of the MDA, i.e. 1/2 x 4.66 sigma.
>>
>>
>>
>> Hope this helps.
>>
>>
>>
>> Barbara Reider, CHP
>>
>> -----Original Message-----
>> From: Arvic Harms <Arvic.Harms at npl.co.uk>
>> To: Bob Shannon <BobShannon at earthlink.net>; radsafe at radlab.nl
>> Cc: Peter Bossew <Peter.Bossew at reflex.at>
>> Sent: Mon, Oct 5, 2009 7:23 am
>> Subject: RE: [ RadSafe ] Comparison of a Measured Result to
>> the Critical/Decision Level; new question
>>
>>
>>
>>
>> Dear all,
>> ISO 11929:2008 draft has the following recommendations in Chapter 6:
>> If result < y* (decision threshold), report as 'not detected'
>> or alternatively as 'less than y# (detection limit)', if
>> required by a regulator.
>> If result >= y*, report the best estimate of the result
>> together with its uncertainty (even if the result is less than
>> y#, the detection limit).
>> I have a question about combining results which contain one
>> or more 'less than y#' types of "results" when you want, for
>> instance, to calculate a mean of several results.
>> It is common to assign a value of [y# divided by a factor of 2]
>> to the 'less than y#' results. Is there any scientific
>> justification for doing this?
>> The 'less than y#' types of "results" are 'not detected' and
>> are therefore 0 and not y#/2 in my opinion.
>> Kind regards,
>> Arvic Harms
>>
>> Dr Arvic Harms
>> National Physical Laboratory
>> Hampton Road
>> Teddington TW11 0LW
>> Middlesex
>> United Kingdom
>> E-mail: arvic.harms at npl.co.uk
>> Tel ++44 20 8943 8512
>> Fax ++44 20 8614 0488
>> -----Original Message-----
>> From: radsafe-bounces at radlab.nl [mailto:radsafe-bounces at radlab.nl] On
>> Behalf Of Bob Shannon
>> Sent: 04 March 2009 20:38
>> To: radsafe at radlab.nl
>> Cc: 'Peter Bossew'
>> Subject: RE: [ RadSafe ] Comparison of a Measured Result to
>> the Critical/Decision Level
>>
>>
>> Peter -
>>
>>
>>
>> I very much agree with the main thrust of your comment about
>> critical levels. Thanks!
>>
>>
>>
>> I have some concerns, though, about censoring measurement results
>> as you have proposed.
>> Most standards that apply to radiochemical measurements
>> (at least in the US) specify that every measured result,
>> whether positive, negative or zero, should be reported in
>> association with its measurement uncertainty. While there
>> are a few programs that make exceptions, and some entities
>> fail to follow the guidance, the guidance is presented in
>> rather unambiguous terms. Here are several examples:
>>
>>
>>
>> · Multi-Agency Radiological Laboratory Analytical
>> Protocols Manual
>> (MARLAP) - EPA, NRC, DOE, DOD, DHS, FDA, USGS, NIST
>> (NUREG-1576, EPA 402-B-04-001A, NTIS PB2004-105421).
>>
>> o Section 19.3.8 Reporting the Measurement Uncertainty
>>
>> § It is possible to calculate radioanalytical results that
>> are less than zero, although negative radioactivity is
>> physically impossible. Laboratories sometimes choose not to
>> report negative results or results that are near zero. Such
>> censoring of results is not recommended. All results,
>> whether positive, negative, or zero, should be reported as
>> obtained, together with their uncertainties.
>>
>>
>>
>> · ANSI N13.30 - Performance Criteria for
>> Radiobioassay, Health
>> Physics Society N13.30-1996
>>
>> o 3.5 Reporting Results [results reported shall include]
>>
>> (5) quantification of the amount of radionuclide(s) (whether
>> positive, negative, or zero) of each radionuclide measured
>> in each part of the body counted;
>>
>> (6) estimates of counting uncertainty
>> and the total
>> propagated uncertainty
>> [which includes counting and other random and systematic
>> uncertainties at one sigma (see Appendix D, Section D.6)];
>>
>> (7) value of the decision level and a priori MDA, in units
>> consistent with the results;
>>
>>
>>
>> · ANSI N42.23 American National Standard Measurement
>> and Associated
>> Instrument Quality Assurance for Radioassay Laboratories,
>> (IEEE, 1996/2004)
>>
>> o A.8 Reporting results by the service laboratory
>>
>> § "Calculated concentration or activity value (whether
>> negative, positive, or zero) using the appropriate blank for
>> each nuclide" [and] "Estimates of the counting uncertainty
>> and total propagated uncertainty (which contains counting
>> and other random and systematic uncertainties)" [must be
>> included in the analytical results reported by the service
>> laboratory]
>>
>>
>>
>>
>>
>> Bob Shannon
>>
>> Quality Radioanalytical Support, LLC
>>
>> BobShannon at earthlink.net
>>
>> Tel: 303-432-1137
>>
>>
>>
>> -----Original Message-----
>> From: radsafe-bounces at radlab.nl
>> [mailto:radsafe-bounces at radlab.nl] On Behalf Of Peter Bossew
>> Sent: Wednesday, March 04, 2009 7:44 AM
>> To: Redmond, Randy (RXQ); <radsafe at radlab.nl>
>> Subject: Re: [ RadSafe ] Comparison of a Measured Result to
>> the Critical/Decision Level
>>
>>
>>
>> Randy,
>>
>>
>>
>> the "error" (more accurately: uncertainty) is irrelevant for
>> this. The
>>
>> "result" (estimate of expectation of a rnd. variable) has to
>> be compared
>>
>>
>> to the decision level or threshold. If, like in your case,
>> result < Lc, it
>>
>> has to be reported as (quantity) < MDA (also called LLD).
>> Also the alpha
>>
>> and beta values connected to Lc and MDA should be reported.
>>
>> Only if the "result" > Lc, it must be reported together with
>> uncertainty
>>
>> (incl. k=number of sigmas), or ideally, with a confidence
>> interval (again
>>
>> with k) (because the distribution is not symmetrical, which
>> is relevant
>>
>> for low level measurements. This can only be ignored for
>> high enough count
>>
>> numbers).
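In code form, this reporting rule amounts to something like the following
(a hypothetical sketch; Lc, MDA and k would come from the measurement
model):

    def report(result, u, lc, mda, k=1.645):
        # Compare the result to the decision level Lc (not to its own
        # uncertainty) and report accordingly.
        if result < lc:
            return f"< {mda:.2g} (MDA/LLD)"
        return f"{result:.2g} +/- {k * u:.2g} (k = {k})"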
>>
>>
>>
>> The relevant document is ISO 11929: Determination of the detection
>> limit and decision threshold for ionizing radiation measurements.
>> Geneva 2000-2001 (8 parts).
>>
>> For a good review of the theory: De Geer L. (2005): A decent Currie at
>> the PTS. Report CTBT/PTS/TP/2005-1, Aug. 2005; available from the
>> CTBTO. Also: De Geer L. (2004): Currie detection limits in gamma-ray
>> spectroscopy. Appl. Radiat. Isot. 61(2-3), 151-160.
>>
>> In Bayesian reasoning:
>> - Weise K. and W. Wöger (1993): A Bayesian theory of measurement
>> uncertainty. Meas. Sci. Technol. 4(1), 1-11;
>> - Weise K. et al. (2006): Bayesian decision threshold, detection limit
>> and confidence limits in ionising-radiation measurement. Rad. Prot.
>> Dosim. 121(1), 52-63;
>> - Michel R. (2000): Quality assurance of nuclear analytical techniques
>> based on Bayesian characteristic limits. J. Radioanal. Nucl. Chem.
>> 245(1), 137-144.
>>
>> For non-Currie decision rules: Strom and MacLellan (2001): Evaluation
>> of eight decision rules for low-level radioactivity counting. Health
>> Physics 81(1), 27-34. The authors show that the standard rules (ISO
>> 11929) may not perform well in extreme cases.
>>
>>
>>
>>
>>
>> Peter
>>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: LLD.pdf
Type: application/pdf
Size: 8243 bytes
Desc: not available
URL: <http://health.phys.iit.edu/pipermail/radsafe/attachments/20091013/372469e0/attachment-0001.pdf>