standards such as those available from the National Bureau of Standards and from the
International Atomic Energy Agency are two methods by which the quality of a
laboratory's work can be shown.
In a large-scale survey such as the Northern Marshall
Islands program, where samples are analyzed by several laboratories, it is all the more
important to assess the validity of the data by regularly having the participating
laboratories analyze blind quality-control (QC) standards.
For this program we have selected three criteria for the analytical reliability of the
data.
(1) The first criterion places limits of acceptability on counting errors. Because
radioactive decay is a statistical process, sufficient counts must be collected to
provide a level of confidence that the number reported is a true measure of the
radioactivity of the sample. Until this criterion is met it is difficult, if not
impossible, to evaluate the data for the remaining two criteria. Consequently,
we established a set of acceptable counting errors (Table 1). The requirements
were scaled to the total radioactivity of the sample, which is the product of the
amount of sample available and its specific activity (activity per unit weight of
sample). Compliance could be easily checked by the individual analyst because
it is based on information available to him: the measured specific activity and
weight of the sample received. This criterion was developed prior to initiation
of the NMIRS field-sample collection program, to estimate the quantity of
sample required by any competent contractor to measure worldwide fallout.
Samples of sufficient size and activity were thus well above the limits
of detection of the contracting laboratories. This was done to avoid reporting
machine detection limits, which give only upper bounds on the concentrations of the
samples and thus overestimate the amount of radioactivity actually present in the
environment when such limits are used as real values.
This is not an uncommon practice when assessing environmental data.
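The link between counting error and total counts can be made concrete. For a Poisson counting process, the 1-sigma relative counting error of N recorded counts is sqrt(N)/N = 1/sqrt(N), so acceptable-error limits like those in Table 1 translate directly into minimum count requirements. The following Python sketch illustrates the arithmetic only; the specific Table 1 limits are not reproduced here, and the function names are illustrative:

```python
import math

def rel_counting_error(counts):
    """1-sigma relative counting error for a Poisson counting
    process: sqrt(N)/N = 1/sqrt(N)."""
    return 1.0 / math.sqrt(counts)

def required_counts(rel_error):
    """Minimum number of counts needed so the 1-sigma relative
    counting error does not exceed rel_error."""
    return math.ceil(1.0 / rel_error ** 2)

# A 10% counting error requires at least 100 counts;
# a 1% counting error requires at least 10,000 counts.
```

Because the requirement scales with total counts, a low-activity sample must either be larger or be counted longer to meet the same error limit.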
(2) The second criterion required that the laboratories reproduce their results on
replicate analyses.
A set of blind duplicate samples was included with each
group of roughly 100 samples (called a DCD, for the accompanying Delivery
Control Document), and the results of each pair of analyses were considered
acceptable if they agreed within twice the measurement accuracy required in
Table 1. Satisfactory performance on duplicates required acceptability on 80%
of all duplicate pairs included in each DCD. Duplicate samples were prepared
and distributed by LLNL.
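The duplicate-agreement test described above can be sketched as a simple check. This is a minimal illustration, not the program's actual procedure: it assumes the required measurement accuracy is expressed as a fraction of the pair mean (the exact Table 1 convention is not reproduced here), and the function names are hypothetical:

```python
def pair_agrees(a, b, required_accuracy):
    """A duplicate pair is acceptable if the two results agree
    within twice the required measurement accuracy, taken here
    (as an assumption) as a fraction of the pair mean."""
    mean = (a + b) / 2.0
    return abs(a - b) <= 2.0 * required_accuracy * mean

def dcd_passes(pairs, required_accuracy, min_fraction=0.80):
    """A DCD's duplicates are satisfactory if at least 80% of
    the blind duplicate pairs agree."""
    ok = sum(pair_agrees(a, b, required_accuracy) for a, b in pairs)
    return ok / len(pairs) >= min_fraction
```

With a 10% required accuracy, for example, a pair reporting 10.0 and 10.5 agrees (difference 0.5 against an allowance of 2.05), while a pair reporting 9.0 and 12.0 does not.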