Page 788 - The Toxicology of Fishes
properties. Many assessments of this type have large uncertainty because of the limited data but are still
effective because they can flag chemicals that either clearly pose substantial potential risk or have a low
probability of posing risk if they are developed and used.
Ecological risk assessments conducted in support of contaminated site assessment and remediation
generally contain elements of both retrospective and prospective risk assessment; for example, con-
taminated site assessments under the USEPA Superfund program proceed in two phases: the remedial
investigation and the feasibility study. The remedial investigation is a retrospective risk assessment
that quantifies the environmental risks posed by current conditions and defines the relationship between
the contaminants of concern at the site and the biological effects they cause. The latter is critical to
properly evaluating risk management options; if a stream has a severely degraded aquatic community,
but stressors beyond site contaminants contribute to that degradation, the relative roles of these stressors
will be a factor in deciding what impact site remediation might have. In the feasibility study phase,
different approaches for reducing exposure to contaminants and the associated risks are evaluated. For
example, for a site with contaminated aquatic sediments, alternatives might include dredging with
disposal in an offsite, lined landfill; capping the sediments in place with clean material; or allowing
natural chemical breakdown and burial processes to reduce exposure over time. In addition to evaluating
the costs, logistics, and impacts of these alternatives, risk assessment is used in a prospective way to
forecast the changes in risk that would occur as a result of each action, along with any new risks that
might be introduced. The site manager then conducts a cost–benefit analysis to determine which
remedial alternative has the combination of costs, impacts, and risk reduction most appropriate for the
site (risk management).
Uncertainty in Risk Assessment
All risk assessments involve uncertainties of many types, and evaluating and communicating uncertainty
are important but often difficult components of risk assessment; this topic alone is the subject of entire
books (Warren-Hicks and Moore, 1998). Different authors have used different schemes for aggregating
or parsing the many types of uncertainty using different descriptors. For purposes of this discussion, we
will loosely categorize these into: (1) natural stochasticity, (2) parameter error, and (3) model error
(Suter, 1993; Suter et al., 1987). Natural stochasticity refers to the natural variation in parameters within
the assessment. Examples include variation in chemical exposure caused by natural phenomena (e.g.,
rainfall events) or differences in sensitivity among individuals in a population. Because these are intrinsic
properties of systems, the goal is not so much to eliminate this uncertainty as it is to effectively
characterize and incorporate it; for example, rather than representing exposures as single values (e.g.,
mean exposure concentration), probabilistic models can be used to represent exposures as a distribution
of exposures likely to exist over time. Likewise, rather than expressing the potency of a chemical exposure
as a single point estimate (e.g., the LC50, which is the concentration estimated to cause lethality in 50% of
the test population), probabilistic models can be used to predict the percentage of fish that are expected
to be affected at any particular exposure concentration. Some of the research needs in this area include
developing toxicity testing and analysis methods that make greater use of the entire exposure–response
curve (see Figure 18.6). Further, because exposures in most toxicity tests are constant and those in nature
are generally variable (sometimes greatly so), methods that can predict response during fluctuating
exposures would be particularly valuable. This is especially true for sublethal effects of long-term
exposures for which very little is known about the effects of fluctuating exposure.
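The contrast between a single point estimate and a probabilistic treatment of exposure can be illustrated with a short sketch. The numbers below (an LC50 of 10 µg/L, a curve steepness of 3, and a lognormal exposure distribution with a median of 8 µg/L) are hypothetical, and the two-parameter log-logistic curve is one common choice of exposure–response model, not the only one:

```python
import math
import random

def log_logistic_response(conc, lc50, slope):
    """Fraction of the test population affected at concentration `conc`
    under a two-parameter log-logistic exposure-response model."""
    if conc <= 0:
        return 0.0
    return 1.0 / (1.0 + (lc50 / conc) ** slope)

# Hypothetical parameters: LC50 = 10 ug/L, curve steepness = 3.
LC50, SLOPE = 10.0, 3.0

# Point-estimate view: a single "mean" exposure of 8 ug/L.
point_risk = log_logistic_response(8.0, LC50, SLOPE)

# Probabilistic view: exposure varies over time (e.g., with rainfall
# events), represented here as a lognormal distribution with the same
# median of 8 ug/L. The expected risk averages the whole response
# curve over that distribution.
random.seed(1)
draws = [random.lognormvariate(math.log(8.0), 0.6) for _ in range(50_000)]
expected_risk = sum(log_logistic_response(c, LC50, SLOPE)
                    for c in draws) / len(draws)

print(f"risk at the median exposure:         {point_risk:.3f}")
print(f"expected risk over the distribution: {expected_risk:.3f}")
```

Because the response curve is nonlinear, the risk computed at a single representative concentration generally differs from the risk averaged over the full distribution of exposures, which is precisely why probabilistic methods are preferred when natural stochasticity is large.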
Parameter error is uncertainty about parameters that have true values but that we can only measure or
estimate imperfectly. This can include random error in a sampling
technique or measurement, which might be addressed in part through the use of replication. It can also
include situations in which a measurement or parameter estimate is biased, such as might occur if there
were an undetected effect of a sample matrix on a chemical analysis. These types of uncertainty are
often addressed through quality assurance or quality control procedures, such as the analysis of standard
reference materials or matrix spikes. Another approach is to make parallel measurements using different
techniques that would not be subject to the same biases.
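These two facets of parameter error, random error addressed by replication and bias checked with matrix spikes, can be sketched numerically. All values below (the replicate results, the spike amount, and the spiked-sample result) are hypothetical:

```python
import statistics

# Hypothetical replicate analyses of one water sample (ug/L).
# Replication lets us quantify the random component of parameter error.
replicates = [12.1, 11.8, 12.6, 12.3, 11.9]

mean = statistics.mean(replicates)
sem = statistics.stdev(replicates) / len(replicates) ** 0.5  # standard error

# Matrix spike check for bias: a known amount of analyte is added to
# the sample, and the measured increase is compared to the amount added.
spike_added = 10.0    # ug/L spiked into the sample (hypothetical)
spiked_result = 21.1  # measured concentration of the spiked sample
recovery = (spiked_result - mean) / spike_added * 100.0

print(f"mean = {mean:.2f} ug/L, SEM = {sem:.2f} ug/L")
# Recovery near 100% suggests little matrix interference; a recovery
# well below or above that range would indicate a biased measurement.
print(f"matrix spike recovery = {recovery:.0f}%")
```

The standard error shrinks with the square root of the number of replicates, whereas a matrix effect is a systematic bias that no amount of replication will reveal; that is why quality control programs use both tools.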