RE: Instrument Response / Training Issues
> The Radiation Protection Manager here at VC Summer Nuclear Station wants
> a training class to be given on the following HP survey instruments:
A couple of quick thoughts:
REM Ball
I believe you'll find from NUREG documents that the average unscattered neutron energy in a plant is approximately 100 keV and the multiply-scattered flux is approximately 50 keV. I don't have any references with me, but I seem to recall that the sources most often used to calibrate rem balls have much higher neutron energies. In that case, for each neutron detected the pulse counter credits a significant dose, whereas the true dose per unit flux in the plant is much lower. Without any good comparison studies, I believe this may be a primary contributor to the fact that a rem ball tends to read significantly higher than some of the newer electronic neutron dosimeters whose detector response factors are set to the average neutron energy in the field. There is also significant variance in TLD response in neutron fields. I believe one utility multiplies its rem ball reading by 0.3 to get close to the electronic neutron dosimeter reading. This is an area where a good comparison to the rem ball really hasn't been available, and further industry data should prove interesting. I'm no longer at the plant with the electronic neutron dosimetry, and I wish I had more data to get to the bottom of the issue. A rough illustration of the calibration-energy effect is sketched below.
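Here's a back-of-the-envelope sketch of that effect, assuming (as the argument above does) that the dose credited per count is fixed at the calibration energy and that the moderated detector's count rate per unit fluence is roughly flat with energy. The conversion coefficients and the Cf-252/Am-Be spectrum-average energies are illustrative round numbers, not cited data -- substitute the ICRP/NCRP fluence-to-dose values your calibration program actually uses:

# Sketch: why a rem ball calibrated with a high-energy source can over-read
# in a softer plant spectrum. All values below are ILLUSTRATIVE round
# numbers -- substitute your program's fluence-to-dose data.

# Fluence-to-dose-equivalent conversion, pSv*cm^2, vs. neutron energy (MeV)
h_per_fluence = {
    0.10: 90.0,   # ~100 keV average unscattered energy (ILLUSTRATIVE)
    2.10: 400.0,  # ~Cf-252 spectrum-average energy (ILLUSTRATIVE)
    4.50: 410.0,  # ~Am-Be spectrum-average energy (ILLUSTRATIVE)
}

field_energy = 0.10  # MeV, average energy actually seen in the plant

# If the count rate per unit fluence is roughly flat, the dose credited per
# count is fixed by whatever energy the instrument was calibrated at.
for cal_energy in (2.10, 4.50):
    ratio = h_per_fluence[cal_energy] / h_per_fluence[field_energy]
    print(f"calibrated at {cal_energy} MeV -> reads ~{ratio:.1f}x high "
          f"in a {field_energy} MeV field")

# With these placeholder numbers the rem ball reads several times high --
# the right direction, if not necessarily the exact magnitude, of the
# rem ball vs. electronic-dosimeter discrepancy described above.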
Teletector
The teletector is a pulse counter that assigns 662 keV worth of dose to each counting event, whereas an ion chamber measures the dose response across all energies. I don't have my teletector-in-N-16 info available, but I have seen data from U. of Mass. at Lowell for what I believe was the MG Ram Ion ion chamber. I believe the density thickness for this chamber was approximately 300-400 mg/cm^2, and the response was within approximately 10% of the expected value. If that low a density thickness yields a comparison this close, then I'd expect the RO-2 chamber wall to be this thick or thicker and to yield similarly good comparisons. This essentially says there's enough density thickness in the chamber walls to achieve decent electronic equilibrium.
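As a rough check on the wall-thickness argument, here's a sketch comparing the chamber's density thickness with the range of the most energetic Compton electrons the photons can produce. The Compton-edge formula and the Katz-Penfold range fit are standard; treat the output as order-of-magnitude only, since buildup from surrounding material also feeds the chamber:

import math

def compton_edge_mev(e_gamma):
    """Maximum Compton-electron energy (MeV) for photon energy e_gamma."""
    mec2 = 0.511  # electron rest energy, MeV
    return 2 * e_gamma**2 / (mec2 + 2 * e_gamma)

def electron_range_mg_cm2(e_mev):
    """Katz-Penfold empirical electron range, mg/cm^2."""
    if e_mev <= 2.5:
        r = 0.412 * e_mev ** (1.265 - 0.0954 * math.log(e_mev))  # g/cm^2
    else:
        r = 0.530 * e_mev - 0.106
    return 1000 * r

for label, e in [("Cs-137 (662 keV)", 0.662), ("N-16 (6.13 MeV)", 6.13)]:
    t_max = compton_edge_mev(e)
    print(f"{label}: max Compton electron {t_max:.2f} MeV, "
          f"range ~{electron_range_mg_cm2(t_max):.0f} mg/cm^2")

# Against a 300-400 mg/cm^2 wall: Cs-137 secondaries (~150 mg/cm^2 range)
# are fully equilibrated, while N-16 secondaries (~3000 mg/cm^2) are not,
# so at N-16 energies the chamber leans on buildup from nearby material.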
PCM 1B/2
I believe you'll find that the false-alarm tables are for a single detector, so to get the overall false-alarm rate for the monitor you'd effectively multiply that value by the number of detectors (strictly, 1 - (1-p)^N, which is about N*p when p is small; see the sketch below). You can also probably get calculators from the major vendors to play what-if scenarios and better understand the monitor's performance. Sometimes you need better tools to obtain a higher level of knowledge.
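A minimal sketch of that combination, with made-up inputs:

# Combined false-alarm probability for a multi-detector monitor.
# p and n_detectors are made-up inputs -- use your own table values.

p = 0.001          # per-detector, per-count false-alarm probability (assumed)
n_detectors = 34   # detector count for the monitor (assumed)

exact  = 1 - (1 - p) ** n_detectors   # P(at least one detector alarms)
approx = n_detectors * p              # simple multiplication, small-p limit

print(f"exact {exact:.4f} vs. N*p approximation {approx:.4f}")
# For small p the two agree closely, which is why "multiply by the number
# of detectors" is a reasonable rule of thumb.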
One thing to cover is detector response vs. beta energy. Higher-energy betas are easier to detect. So if you have failed fuel and your average beta energy rises from ~96 keV (the Co-60 average) to 200-300 keV, a monitor set at 5,000 dpm will alarm at lower activity levels because those betas are more easily detected. Radon-discrimination mechanisms that work perfectly almost sound too good to be true... Think carefully about the body geometry and distances from the detectors vs. alpha range in air, and the potential for licensed alpha to be present. The EPRI document concerning alpha at commercial nuclear power facilities suggests that you only need alpha monitoring if your beta/alpha ratio is <50:1. That is equivalent to 5,000 dpm/100 cm^2 fixed plus removable beta against 100 dpm/100 cm^2 alpha, or 1,000 dpm/100 cm^2 removable beta against 20 dpm/100 cm^2 alpha. The last value I recall seeing from INPO was a sensitivity of 300 dpm for a personnel monitor. Performance data will show you that 20 dpm is essentially unobtainable: remember, it takes a SAC-4 (30% efficient ZnS) approximately a minute to achieve an MDA of <20 dpm, as the quick check below illustrates.
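A quick Currie-style check of that claim (the alpha background rate is an assumed value; substitute your own):

import math

# Currie-style MDA, in dpm: MDA = (2.71 + 4.65*sqrt(B)) / (eff * t)
# where B is background counts accumulated in count time t (minutes).

eff = 0.30        # SAC-4 ZnS alpha efficiency (from the discussion above)
bkg_cpm = 0.1     # alpha background, counts/min (ASSUMED -- use your own)

for t in (0.5, 1.0, 2.0):            # count times, minutes
    b = bkg_cpm * t                   # background counts in the count time
    mda = (2.71 + 4.65 * math.sqrt(b)) / (eff * t)
    print(f"t = {t:4.1f} min -> MDA ~ {mda:5.1f} dpm")

# With a low alpha background, a ~1 minute count does land under 20 dpm,
# consistent with the SAC-4 comment -- and it shows why 20 dpm is out of
# reach for a personnel monitor that counts for only a few seconds.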
With respect to performing plateaus for all of the detectors: once I got the monitor set up, I'd only perform a plateau if a detector's efficiency during the calibration fell outside your administrative variance against the mean. You'd also perform one when you replaced a detector. If you read the plateau, you can put in a new detector and tune it to the performance of the others from the plateau data alone. The key is that you'll have a "worst" detector, the one with the worst signal-to-noise (efficiency relative to background), and this detector will essentially drive the count time for the whole monitor. I have a methodology for finding this detector and tuning the others around it to get the lowest background with the highest average efficiency and a tight fit. A tight fit to the average is also important, or you'll get false alarms on the one detector whose efficiency is too high. The sketch below shows one way to rank detectors.
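Here's a sketch that ranks detectors by an efficiency-to-background figure of merit; the data are invented, and eff/sqrt(bkg) is just one common choice, not necessarily the exact methodology referred to above:

import math

# Made-up calibration data: (detector id, efficiency, background cps)
detectors = [
    ("D01", 0.22, 12.0),
    ("D02", 0.25,  9.5),
    ("D03", 0.18, 14.0),   # low efficiency AND high background
    ("D04", 0.24, 10.5),
]

# Figure of merit: efficiency / sqrt(background). The detector with the
# lowest FOM has the worst signal-to-noise and drives the count time
# needed for the whole monitor.
ranked = sorted(detectors, key=lambda d: d[1] / math.sqrt(d[2]))
print("worst detector:", ranked[0][0])

# Tightness of the efficiency spread matters too: an outlier that is too
# efficient relative to the mean will false-alarm first.
effs = [d[1] for d in detectors]
mean = sum(effs) / len(effs)
spread = max(abs(e - mean) / mean for e in effs)
print(f"mean eff {mean:.3f}, max deviation from mean {spread:.1%}")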
PM-7
The PM-7 is a very predictable instrument. It can be made to work at high background levels (>13,000 cps), and it will appropriately take itself out of service if the background fluctuates too much. The shield factors in the instrument might run 6% or so, which means that at the end of the count cycle the counts are inflated by 6%. If you aren't the same size as the person who generated those values, you're susceptible to excessive false alarm rates in higher-background areas. In general, set the false alarm probability as high as you can while still obtaining a count time of <2 seconds. The longer the count time in high and changing background levels, the more the count will be affected. Minimize the count time and accept higher RDA values to get good performance in high-background areas; the sketch below shows why.
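Here's a sketch of that trade-off, assuming a simple k-sigma alarm threshold and treating a person-size mismatch as a fixed percentage error in the shield correction (all input numbers are assumed):

import math

# Sketch of why a fixed-percentage body-shield correction bites harder as
# the count gets longer in high background. Inputs are ASSUMED values.

bkg_cps = 13000.0      # background, counts/sec (the high end noted above)
shield_err = 0.01      # 1% error in the ~6% shield factor (ASSUMED)

for t in (0.5, 1.0, 2.0, 4.0):                 # count time, seconds
    b = bkg_cps * t                             # expected background counts
    sigma = math.sqrt(b)                        # counting-statistics sigma
    bias = shield_err * b                       # counts from shield mismatch
    print(f"t={t:3.1f}s  sigma={sigma:6.1f}  shield bias={bias:7.1f}  "
          f"bias/sigma={bias / sigma:4.2f}")

# The bias grows like b ~ t while sigma grows like sqrt(b): the longer the
# count in high background, the further a person-size mismatch pushes the
# result toward the alarm point -- hence the advice to keep the count time
# short and accept a higher RDA.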
Essentially everyone is dragging these monitors deeper into the plant, into increasingly higher background levels, and it forces us to learn more about operating in adverse environments. Most plants pick the calibration source, Co-60 or Cs-137, that is closest to their average plant photon energy. Think about the photon emission rate of your plant mix vs. the source you're using; a rough comparison is sketched below.
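For example (the mix fractions below are invented; the photons-per-decay and energy values for Co-60, Cs-137, and Co-58 are standard decay data):

# Emission-rate-weighted average photon energy for a plant mix vs. the
# calibration source. The mix fractions are INVENTED for illustration.

# (nuclide, relative activity, photons/decay, mean photon energy MeV)
mix = [
    ("Co-60",  0.5, 2.00, 1.25),   # 1.17 + 1.33 MeV pair
    ("Cs-137", 0.3, 0.85, 0.662),
    ("Co-58",  0.2, 0.99, 0.811),
]

rate  = sum(a * y for _, a, y, _ in mix)             # relative emission rate
e_avg = sum(a * y * e for _, a, y, e in mix) / rate  # rate-weighted energy
print(f"mix: ~{rate:.2f} photons per unit activity, "
      f"mean energy {e_avg:.2f} MeV")

# Compare with the candidates: Co-60 emits ~2 photons/decay at 1.25 MeV
# average, Cs-137 ~0.85 photons/decay at 0.662 MeV. The source with the
# closest average energy is not automatically the closest match in
# photon emission rate.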
Additionally, more work needs to be done to understand the impact of performing a plateau to set a detector voltage on a fixed-discriminator, variable-gain system with different photon energies. Where do the photons go whose energies fall below the "knee"? I believe work in this area will generate some interesting insights... Remember David Carradine in the Kung Fu series, when the master would offer an intriguing riddle which held much reward?... These philosophies apply to tool monitors as well.
AMS-4
A very reliable instrument. Field operations will match the performance formulas in the manual to a "T": for example, the time the monitor needs to go into service, as a function of background level and alarm setpoint, will generate curve fits of plant data that match the manual. The primary issue to remember is that the background/source detector responses vary in the z plane, which can cause an under-response or an over-response. For example, draining a reactor cavity will change the gamma flux gradient across the monitor and cause the background subtraction factor to change. Reset that value periodically as plant conditions like this change; doing so helps you "see" the effect, as the sketch below illustrates.
The key is to understand the personality of the monitor. It is not likely that a large group of rad techs will ever interface with the monitor enough to be good at diagnosing issues in the field, which is where simplified written guidance is key to monitoring success. The AMS-4 could take up much more space, but it's time to go...
As you can tell, I don't have much of an interest in instruments...
Glen