RE: Asking for opinions -Reply
At 01:39 PM 9/16/1999 -0500, you wrote:
>If the different
>dosimeters used to determine dose control and dose correlation are
>not properly calibrated (independently of each other), there is a
>problem. In the end, a sound technical basis, good procedures and
>good quality assurance and quality control is essential. In
>conclusion, a sound Quality System.
Let me offer a scenario in which the systems are effective even though one
of them doesn't meet anyone's definition of accurate calibration.
Let's say that my TLD system and my electronic dosimeter (ED) system are
both calibrated using Cs-137 gamma with appropriate NIST traceability. The
ED system is used with an electronic access control system, so every
single controlled-area entry has its own discrete dose measurement, without
exception. That is, the secondary dosimetry system is comprehensive,
monitoring all occupational exposure alongside the primary dosimeter.
But every quarter the collective dose total measured by the ED system is 10
percent smaller than the collective dose reported by the TLD system.
Quarter after quarter, the difference is very reproducible, even
predictable. Undoubtedly there is a systematic difference between the
results of the two systems. Even a quick scan of the primary-secondary
dosimeter comparison printout shows a preponderance of negative biases in
the individual comparisons.
If I introduce a 10 percent upward bias in the calibration of my EDs, the
collective dose difference goes away, and my primary-secondary dosimeter
comparisons show as many positive differences as negative ones. Have I not
solved the systematic bias problem without actually knowing what caused it?
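
To put some numbers on that, here's a small Python sketch of the
before-and-after comparison. The paired quarterly results are invented
purely for illustration; the point is only how the collective totals and
the signs of the individual differences behave when the 10 percent factor
is applied.

# Hypothetical paired quarterly results, in mrem (illustration only).
tld = [120.0, 85.0, 240.0, 60.0, 310.0, 45.0]   # primary (TLD)
ed  = [110.0, 74.0, 220.0, 55.0, 278.0, 38.0]   # secondary (ED), running low

def compare(primary, secondary, label):
    coll_p, coll_s = sum(primary), sum(secondary)
    diffs = [s - p for p, s in zip(primary, secondary)]
    pos = sum(1 for d in diffs if d > 0)
    neg = sum(1 for d in diffs if d < 0)
    print(f"{label}: TLD {coll_p:.0f} mrem, ED {coll_s:.0f} mrem "
          f"({100 * (coll_s - coll_p) / coll_p:+.1f}%), "
          f"{pos} positive / {neg} negative individual comparisons")

compare(tld, ed, "As measured")
compare(tld, [1.10 * d for d in ed], "With +10% ED calibration bias")

With these made-up numbers the as-measured comparison runs about 10 percent
low and every individual difference is negative; after the adjustment the
collective totals agree to within about a percent and the signs split
evenly.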
Most people would conclude that the revised calibration means the two
systems are producing acceptably similar numbers. And the system that was
deliberately biased is, after all, a SECONDARY dosimetry system, whose main
purpose is real-time exposure control. The major fault to be avoided in
such a program would be a meaningful systematic difference between the two
systems.
There's a reason this may be the most reasonable approach to this problem.
EDs have a minimum reportable dose. When an ED is zeroed for each entry and
read out upon exiting, and there are 4-8 entries per day on several days a
week, a 1 mrem minimum reportable dose will result in a significant amount
of "lost" dose by the end of the quarter compared to the TLD. And there's
nothing that can be done the ED calibration process to correct this because
it isn't a measurement problem, it's a bookkeeping problem.
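
A rough simulation shows how quickly those per-entry losses add up. Every
number below is an assumption chosen for illustration, not facility data,
and the way the ED drops sub-threshold dose is assumed as well:

import random
random.seed(0)

# Assumed workload: ~6 entries/day, 4 days/week, 13-week quarter.
entries = 6 * 4 * 13
# Assumed per-entry dose, uniformly 0-10 mrem (illustration only).
true_doses = [random.uniform(0.0, 10.0) for _ in range(entries)]

# The TLD integrates everything over the quarter.
tld_total = sum(true_doses)
# Assume the ED reports each entry in whole mrem, so anything below the
# 1 mrem minimum reportable dose, and the fraction above it, is dropped.
ed_total = sum(int(d) for d in true_doses)

lost = tld_total - ed_total
print(f"TLD {tld_total:.0f} mrem, ED {ed_total:.0f} mrem, "
      f"lost to per-entry bookkeeping: {lost:.0f} mrem "
      f"({100 * lost / tld_total:.0f}%)")

Under these assumptions the ED total comes out on the order of 10 percent
below the TLD total, and no change to how each reading is measured can
recover it; those increments were simply never recorded.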
Any program choosing this approach should be able to achieve
primary-secondary dosimetry comparison results that encourage the facility
staff to trust the systems, too. They just need to document, completely and
accurately, what they've done and why.
===================================
Bob Flood
Dosimetry Group Leader
Stanford Linear Accelerator Center
(650) 926-3793
bflood@slac.stanford.edu