Page 790 - The Toxicology of Fishes

                        Model error is a broad category that might also be thought of as knowledge uncertainty. These are
                       generally cases where the piece of information needed for the risk assessment is not directly measurable
                       and must be inferred or predicted from other information through the use of an actual mathematical
                       model, such as a bioaccumulation model, or in a more conceptual manner. All models have uncertainties,
                       and these uncertainties are therefore made part of the overall uncertainty in the risk assessment when
                       models are incorporated. It is common, for example, for a risk assessment to seek protection of a
                       particular species, but toxicity data for that species may be lacking and require extrapolation from data
                       for other species; assessments involving endangered species are a significant example, as toxicity data
                       are rarely available for endangered species. In this case, knowledge of the toxicology of the chemical
                       can help determine the most effective means of estimating the response of that species to the stressors
                       at hand. In other cases, chemicals of concern may include some for which few or no toxicity data are
                       available. In these cases, tools such as quantitative structure–activity relationships (QSARs) may be
                        useful for estimating the potency of a chemical or predicting species that may be at greatest risk, based
                       on data for other chemicals. The entire issue of extrapolation—among species, among chemicals, among
                       endpoints, among doses—is a major source of uncertainty in risk assessment and one where mechanistic
                       toxicology can be brought to bear most directly.
                        Another type of model error could be the conceptual uncertainty that results when the processes
                       involved in producing risk are incompletely understood. As an example, there is currently great uncer-
                       tainty regarding the degree to which exposure of fish to metals in the diet contributes to toxicity, how
                       to quantify that risk, and how to integrate it with risk from concurrent waterborne exposure (Meyer et
                       al., 2005). Another example is how levels of effect estimated from laboratory tests (e.g., 20% reduction
                       in growth) will affect populations of the same species in the field. Tremendous accuracy in determining
                        an LC50 will be of relatively little help in most risk assessments if the means to relate that result to the
                       actual populations at risk are lacking. Also, risk assessments often focus on direct effects of chemical
                       exposure (e.g., direct mortality of toxicologically sensitive organisms) to the exclusion of indirect effects
                       (e.g., loss of important food resources), which are often more difficult to predict.
                        Model error is generally the most difficult type of uncertainty to address in a risk assessment. One
                       way in which model error can be evaluated is through the accuracy of predictions of the same model in
                       other situations where the actual outcomes are known. Another approach is to apply more than one
                       model to the problem and evaluate the concordance of the predictions. In some regulatory programs,
                        generic assessment factors are used, such as dividing an LC50 by 1000 to account for unknown or
                       unmeasured chronic effects.
                        It is common for risk assessors to address uncertainty by choosing environmentally conservative values
                       (i.e., those least likely to underestimate risk) to derive a worst-case scenario; for example, one might
                       choose the most sensitive toxicity test endpoint and compare it to the highest reported exposure con-
                       centration. To some degree this approach skirts the uncertainty issue, but it is often appealing because
                       it is expected to place the majority of the uncertainty to one side of the risk prediction by erring toward
                       the side of protection of the resource being assessed.  This approach has utility for screening-level
                       assessments—in essence, if the worst case does not show risk, then no more analysis is necessary.
                        There is a danger in this approach, as well. Risk assessments often combine results of multiple analyses
                       to characterize risk, and the uncertainties are often multiplied in the process. By combining multiple
                       assumptions that are all individually worst-case conditions, one can create an assessment that exaggerates
                       the actual risk. This is easy to imagine in terms of simple probabilities. If a 95% confidence level is
                       chosen as a worst case for a variable, then there is only a 5% chance that the true value is actually that
                       bad or worse. If three conditions are necessary to produce risk, and the value for each of these conditions
                       is chosen with a 95% confidence level (and the distributions are independent), then the probability that
                        all three worst-case conditions are exceeded is (0.05)³, or 0.000125, which is likely beyond a reasonable
                       worst case in most instances.
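The compounding effect described above can be verified with a short calculation. The sketch below simply multiplies independent exceedance probabilities; the 95% confidence level and the three independent conditions follow the example in the text:

```python
def joint_worst_case_probability(p_exceed: float, n_conditions: int) -> float:
    """Probability that all n independent worst-case assumptions are
    simultaneously exceeded, given each is exceeded with probability p_exceed."""
    return p_exceed ** n_conditions

# One assumption at the 95% confidence level: 5% chance the true value is worse.
print(joint_worst_case_probability(0.05, 1))  # prints 0.05

# Three stacked 95% worst-case assumptions: (0.05)^3, about 1.25e-4 --
# far less probable than any single "worst case" suggests.
print(joint_worst_case_probability(0.05, 3))
```

The calculation assumes the three input distributions are independent; correlated inputs would make the joint probability larger, though still smaller than any individual exceedance probability.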
                        One way to avoid this stacking of conservative assumptions is through the use of statistical simulations.
                       A commonly used approach is Monte Carlo analysis, a statistical simulation process that estimates the
                       likelihood of various outcomes based on variability in the input variables (e.g., exposure concentrations,
                       species sensitivity distributions). In this approach, each input variable to the risk calculation is described
                       not by a single value but by a statistical distribution (normal, log-normal, Poisson, etc.). For the analysis,
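A minimal sketch of the Monte Carlo approach just described, using the standard library only. The distributions and their parameters are entirely hypothetical, chosen solely to illustrate drawing each input from a distribution rather than using a single value:

```python
import random

random.seed(1)  # fix the random draws for reproducibility

N = 100_000  # number of Monte Carlo iterations

exceedances = 0
for _ in range(N):
    # Hypothetical inputs: exposure concentration and the effect threshold
    # for a sensitive species, each drawn from a log-normal distribution
    # (mu and sigma are on the natural-log scale; units might be ug/L).
    exposure = random.lognormvariate(mu=0.0, sigma=0.5)
    threshold = random.lognormvariate(mu=1.0, sigma=0.5)
    if exposure > threshold:
        exceedances += 1

# The fraction of iterations in which exposure exceeds the effect threshold
# estimates the probability of an adverse outcome.
risk = exceedances / N
print(f"Estimated probability of effect: {risk:.3f}")
```

Rather than a single worst-case comparison, the output is a probability that reflects the variability in both inputs; a fuller analysis would also report the distribution of outcomes, not just this point estimate.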