
RE: Measurement uncertainty of home radiation monitors



IMHO, GM's are quite reliable and trustworthy. The ones in 'toys' are often exactly the same ones you find in 'high class' instruments.



The discharge process in GM's is based on a well understood physical effect, and depends on wall thickness and composition, gas composition, and applied voltage. As long as wall and gas stay the same (that is, the tube does not leak), you have the same sensitivity if the voltage stays the same. That makes GM's quite interchangeable.

Voltage dependency is (correct me if I am wrong) some 1% to 3% for a 5% change in voltage around the center of the plateau. It is simple and inexpensive to implement voltage regulation well below 1%, so that is not a problem in any modern instrument.
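
As a quick back-of-the-envelope sketch (taking the worst-case plateau slope above and an assumed 1% supply regulation):

    # Rough arithmetic with the figures assumed above.
    plateau_slope = 0.03 / 0.05      # up to 3% sensitivity change per 5% voltage change
    regulation = 0.01                # supply regulated to within 1%
    sensitivity_variation = plateau_slope * regulation
    print(f"worst-case sensitivity variation: {sensitivity_variation:.1%}")   # about 0.6%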

Temperature effects are minimal, and nobody - as far as I know - accounts for them in GM electronics (if somebody does, please let me know). In conclusion, GM's do not drift - as long as you stay below a few billion total counts, say the manufacturers. At low count rates, they are intrinsically linear.



OTOH, as Mark hints, GM's preferred mode of failure, leakage, is evidenced by an increase in dark count rate. Power supply failures usually result in no HV - no counts - but sometimes in ringing or poor regulation, or spurious counts in case of spark discharges (humidity).



Now, I am the first to say that periodic checks and (maybe, infrequent) calibrations are needed; a number of accidents were traced to malfunctioning detectors. A meter can be kaput and still report background counts: that can happen (has happened) especially in PMT (scintillator) assemblies: when the NaI or the plastic is gone (detached), you still get the PMT background.



Particularly important, users should be aware that Geigers read zero when flooded with sufficiently strong sources. The same goes for scintillators, although in some cases manufacturers correct for that.
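
A minimal sketch of why the reading folds back, using the paralyzable dead-time model; the 100 microsecond dead time is an assumed typical GM figure, not a datasheet value:

    import math

    # Paralyzable dead-time model: observed = true * exp(-true * tau).
    tau = 100e-6                                   # assumed GM dead time, 100 microseconds
    for true_rate in (1e2, 1e3, 1e4, 1e5, 1e6):    # true interactions per second
        observed = true_rate * math.exp(-true_rate * tau)
        print(f"true {true_rate:>9.0f} /s -> observed {observed:8.1f} /s")
    # The observed rate peaks and then falls back toward zero as the field keeps rising.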



The calibration issue is quite different with scintillator or semiconductor detectors: there, you have pulses of varied height, and your count rate depends on the threshold for peak discrimination, which is kind of delicate, especially if you consider that, for example, in a 12-stage PMT the pulse height is roughly proportional to the applied voltage to the power of 12. You must calibrate periodically and with low energy gammas (Am-241) if you want to use scintillators to monitor environmental radioactivity. Unless you do spectroscopy, scintillators are a waste of money and a constant worry.
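
To put a number on how delicate that threshold is, a small sketch assuming pulse height goes roughly as the voltage to the 12th power, as above:

    # How a small HV drift moves pulse heights relative to a fixed threshold.
    n_stages = 12
    for hv_drift in (0.005, 0.01, 0.02):           # 0.5%, 1%, 2% drift in applied voltage
        gain_change = (1 + hv_drift) ** n_stages - 1
        print(f"{hv_drift:.1%} HV drift -> about {gain_change:.0%} change in pulse height")
    # A mere 1% drift already shifts pulse heights by roughly 13%.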



So, how accurate is a GM? In my opinion, 5% is a quite conservative figure, IF you perform a calibration once with a known and weak source, and IF you only measure gammas. If you don't calibrate, I would trust 'nominal' manufacturers' data for gamma sensitivity to be within 10% of the true values. Beta and alpha absorption depends too much on window thickness, and that is too uncertain.
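
For illustration only, this is how the nominal figure would be used; the sensitivity value here is an assumption for a typical small tube, not a quote from any datasheet:

    # Converting a raw GM count rate to an exposure-rate estimate from a
    # nominal gamma sensitivity (assumed value; trust it to within ~10%).
    cpm = 42.0                           # measured counts per minute
    sensitivity_cpm_per_mR_h = 3500.0    # assumed nominal sensitivity, cpm per mR/h
    exposure_uR_h = cpm / sensitivity_cpm_per_mR_h * 1000.0
    print(f"about {exposure_uR_h:.0f} microR/h, +/- ~10% from the nominal sensitivity alone")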



In my experience, the problem in calibration is not in getting a known source, it is in shielding well enough to be able to make a zero reading. I am under the impression that in many if not most cases calibration is made <assuming> that the detector background is zero (that is, it is made on one point only). I have read GM manufacturers specifying device backgrounds at levels equivalent to << 10% of normal environmental background, so that COULD be a cause of systematic overreporting (or variance) of environmental radioactivity levels. In other words, part of the reported count rate, at low levels, could be instrumental, and not accounted for by calibration (LNT is wrong again!). Now that I think about it, I have never seen firmware that allows subtracting a fixed (or adjustable) instrumental background from a count rate.
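
If such firmware existed, the subtraction itself would be trivial; a hypothetical sketch, with all numbers assumed:

    # Hypothetical: subtract a fixed instrumental background, measured once
    # with the tube well shielded, before reporting the count rate.
    instrumental_background_cpm = 2.0    # assumed, from a shielded zero reading
    measured_cpm = 25.0                  # assumed raw field reading
    net_cpm = max(measured_cpm - instrumental_background_cpm, 0.0)
    print(f"net count rate: {net_cpm:.1f} cpm")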



And how precise is a GM? The relative standard deviation (one sigma) is the inverse of the square root of the number of counts you have collected. For a 1% standard deviation you must collect ten thousand counts.
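
In other words:

    import math

    # Relative one-sigma uncertainty of N counts is 1/sqrt(N), so the counts
    # needed for a target relative precision are 1/target**2.
    for target in (0.10, 0.05, 0.01):
        counts_needed = math.ceil(1.0 / target ** 2)
        print(f"{target:.0%} relative sigma -> {counts_needed} counts")
    # At a background of a few tens of cpm (assumed), ten thousand counts
    # means several hours of integration.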



And if I have confused again precision with accuracy, please correct and forgive, but don't shoot me. And sorry if I make it look complicated, it seems so simple...





Marco

Dr Marco Caceci
Principal
LQC s.l.
Noorderbrink 26
2553 GB Den Haag
The Netherlands
Tel + 31 70 397 5653
http://radal.com



-----Original Message-----
From: owner-radsafe@list.vanderbilt.edu [mailto:owner-radsafe@list.vanderbilt.edu] On Behalf Of mark.hogue@SRS.GOV
Sent: 11 March, 2002 12:25
To: radsafe@list.vanderbilt.edu
Subject: Measurement uncertainty of home radiation monitors



There have been a number of links posted to hobby-level radiation monitors on the web.
...........

How much confidence can we have in these monitors? I'd like to think that the monitors will work fine until the tube fails and that failure will be obvious. But if that's true, why do we have to do so many calibrations and source checks on monitors in industry? (I have seen a monitor failure at a nuclear facility result in a steady 300 cpm and that was not an obvious failure. That was a G-M tube with a count rate meter and I think the fault was in the tube.)



More to the point, is the original calibration valid in the range of 5 to 20 microR/hr that is the normal reported reading? The vendor claims (below) an accuracy of +/-5%, but is imprecise in what that means (e.g. range of the monitor?). Or, should we believe that a reading of 5 microR/hr is accurate to +/- 0.25 microR/hr?!
...........



Mark Hogue


