radiosensitive and not radioresistant, which occurred in the 1950s when thyroid cancers were produced in animal models and when they were shown to be produced in children years after x-irradiation of the head and neck, was there more study of the late effects of radioiodine.
The significance of cesium-137 was not appreciated until the 1950s, when it was found in the environment. Although limited research on radiocesium was done in the 1940s, it went through the pattern of testing--metabolism, acute effects, late effects--later, and over a chronologically compressed period.

Risk Assessment

Acute responses to irradiation were fairly easy to describe and, probably, to predict, in that high enough doses would elicit acute effects, and lower ones would not. “Safe” exposures were at levels less than tolerance doses, that is, levels at which recovery would occur.
Assessment of radiation risk in the 1950s was in the context of thresholds, injury and repair, and life shortening. Blair (1962) discussed radiation injury in terms of acute reparable and irreparable injuries measured by life shortening. That life shortening induced by low-level radiation was primarily a carcinogenic effect was not generally recognized (FRC, 1960).

“Early” injury in exposed mammals was frequently measured by comparing pre- and post-exposure blood counts, a practice almost as widespread as the use of film badges for dosimetry.
That low-level exposures could pose risks of latent health effects was not consistent with the threshold, or tolerance, dose concept then in vogue. At that time genetic effects were thought to be without a threshold, or linear in response (NCRP, 1954), an opinion largely influenced by the early research of
