Page 302 - Deep Learning
Error Correction in Context 285
states can be recognized as such before they erupt into disasters. For example,
Carroll and Mui recommend that business managers who are considering an
acquisition or a merger put together a review board to conduct an independent
review of the relevant factors and to write a report well before the papers are
signed. They suggest the appointment of a devil’s advocate – once upon a time
an honored role in decisions by the Catholic church to confer sainthood – to
critique a business deal before it is sealed. Continuous monitoring of hospital
patients and advance battlefield intelligence for soldiers play the same role in
those domains of experience.
Providing additional sources of information about system states is useful,
but it does not by itself address the generic problem, emphasized by Reason
and others, that a complex system can fail simultaneously in more than one
way, and it is, in principle, impossible to compute ahead of time all the var-
ious consequences of every possible combination of failures on the system
indicators. In this case, combinatorics work against us: If there are 100,000
system components, there are 100,000², or 10,000,000,000, possible
two-component failure states. It is obviously impossible to list the symptoms of
each such failure type ahead of time. A similar situation holds with respect
to other complex systems: How many aspects of patient care can go wrong at
the same time in a hospital, and how many details are there to be considered
in a mega-merger between two corporations? But if the failure states cannot
be anticipated or listed ahead of time, how can the operators recognize and
interpret them when they occur? The Three Mile Island incident is the type
specimen for this problem, but it is potentially a matter of concern for any
complex system.
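The arithmetic behind this combinatorial explosion is easy to verify. A minimal sketch (the component count of 100,000 is the text's hypothetical figure; the squared count treats pairs as ordered, while `comb(n, 2)` gives the slightly smaller count of unordered pairs):

```python
from math import comb

n = 100_000  # hypothetical number of system components, as in the text

# Ordered pairs of components, matching the text's back-of-envelope figure.
ordered_pairs = n ** 2
print(ordered_pairs)  # 10,000,000,000

# Unordered two-component failure states: C(n, 2) = n * (n - 1) / 2.
two_component_states = comb(n, 2)
print(two_component_states)  # still on the order of five billion
```

Either way of counting, the number of two-component failure states is billions, and the count grows steeply again for three or more simultaneous failures, so enumerating symptoms per failure state is hopeless.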
The problem is structurally similar to the problem of automatically diag-
nosing student misconceptions in intelligent tutoring systems, educational
software systems that use Artificial Intelligence techniques to provide indi-
vidual instruction: The universe of possible misunderstandings is too vast
to list all the possible incorrect representations of even a modestly complex
subject matter. Constraint-based modeling provides a workable solution to
this problem.29 The constraint base for an intelligent tutoring system does
not list possible student errors but states the constraints that specify what is
correct for the domain. It thereby indirectly specifies the universe of all pos-
sible errors: The latter is the set of all ways in which the constraints can be
violated. The analogous situation with a large space of pre-failure states
suggests that constructing a constraint base for a complex system
might enable a similar solution: The set of constraints on proper functioning
of the system implicitly specifies all the ways in which things can go wrong.
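The idea can be made concrete with a small sketch. Instead of enumerating failure states, a constraint base lists conditions that any correct system state must satisfy; a fault is detected as the set of violated constraints. All names here (the constraints, `check_state`, and the example readings) are hypothetical illustrations, not drawn from the text:

```python
# A constraint base: predicates that every correct system state must satisfy.
# Violations, singly or in combination, implicitly cover the space of faults.
constraints = {
    "pressure_in_range": lambda s: 50 <= s["pressure"] <= 150,
    "valve_matches_mode": lambda s: s["valve_open"] == (s["mode"] == "fill"),
    "temp_below_limit": lambda s: s["temp"] < 400,
}

def check_state(state):
    """Return the names of all violated constraints (empty list = no fault detected)."""
    return [name for name, ok in constraints.items() if not ok(state)]

# A state in which two things have gone wrong at once; neither failure
# combination had to be anticipated or listed ahead of time.
state = {"pressure": 180, "valve_open": True, "mode": "drain", "temp": 350}
print(check_state(state))  # ['pressure_in_range', 'valve_matches_mode']
```

The point of the sketch is that the constraint base stays linear in the number of conditions, yet any combination of simultaneous violations is recognized automatically, which is exactly what per-failure symptom lists cannot provide.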