Deep Learning

The Growth of Competence

               The hiatus in theorizing came to an end with a 1979 Psychological Review
            paper by Simon and Yuichiro Anzai. 42 They presented a computer program
            that modeled the successive strategy changes of a single person who solved
            a problem-solving task multiple times. The paper demonstrated the feasi-
            bility of simulating the acquisition and not only the execution of cognitive
            skills. The paper was soon followed by the initial versions of J. R. Anderson’s
            ACT model and the Soar model proposed by Newell, Paul S. Rosenbloom
            and John E. Laird. 43, 44 The 1983 version of the ACT model included six dif-
            ferent learning mechanisms (proceduralization, rule composition, rule gen-
            eralization, rule discrimination, strengthening and weakening). The success
            of the initial models established computer simulation as a useful theoretical
            tool.
               The story since 1979 is one of proliferation. A wide variety of theories
            have been proposed and embodied in computer simulation models. Typically,
            a model consists of a cognitive architecture that follows some theory of sta-
            ble behavior like the one presented earlier in this chapter, plus a repertoire
            of learning mechanisms. Models differ in the details of their performance
            mechanisms, but, more important, they incorporate different learning mech-
            anisms. The fact that theories are embedded in computer programs have the
            peculiar linguistic consequence that they tend to be known by the proper
            names of those programs instead of descriptive titles. Almost all serious mod-
            els are multimechanism models. For example, Ron Sun’s CLARION model
            learns through both bottom-up generation of rules and rule generalization,
            while VanLehn’s Cascade model learns from solved examples as well as by
            analogy. 45, 46 All in all, the emergence of computer simulation as a theoret-
            ical tool triggered an unprecedented explosion of the theoretical imagina-
            tion. More new hypotheses about the mechanisms behind the acquisition of
            cognitive skills were proposed in the period 1979–1999 than in the previous
            century. 47
               The journey from the Law of Effect to Cascade and CLARION represents
            a century of scientific progress. Theories of skill acquisition are more precisely
            formulated, more responsive to the complexity of human skill acquisition and
            more explanatory than they were a century ago. Nevertheless, the historical
            trajectory through the space of theories looks in retrospect like a drunkard’s
            walk. The choice of mechanisms to include in any one model is not grounded
            in any principled argument about which repertoire of mechanisms is most
            likely to be the one that has in fact evolved in the human brain. For example,
            why did the ACt model of 1983 vintage not include any mechanism for learn-
            ing from examples, and why does Cascade lack a discrimination mechanism?