
440                    Notes to Pages 264–276

              9.  Siegler (1996, Fig. 4.4) and Siegler and Jenkins (1989).
              10.  Delaney, Reder, Staszewski and Ritter (1998) present evidence that improvement
                follows a power law both before and after a strategy discovery, but power laws
                with different slopes.
              11.  Ohlsson (1992e).
              12.  Crossman (1959, pp. 153–156).
              13.  The estimate for the number of chess chunks is often stated as 50,000 chunks
                of chess knowledge, typically supported by a reference to a 1973 paper by William
                G. Chase and Herbert A. Simon. These two authors published two papers in 1973
                that give different estimates. Chase and Simon (1973a, p. 402) give the estimate
                of 50,000 units, while Chase and Simon (1973b, p. 249) instead advanced the
                much less precise estimate of 10,000 to 100,000 chunks. The latter estimate is
                not original but is based on two prior papers by Simon and Barenfeld (1969, pp.
                481–482) and Simon and Gilmartin (1973, pp. 38–43). The estimates in these
                two papers are more thorough than those reported in Chase and Simon (1973a,
                1973b). The main method of estimation is to create a chess-playing program with
                a database of chess chunks, measure its performance as a function of the number
                of chunks, and then extrapolate how much larger its database would have to be
                for the program to perform like a world-class player. This method of estimation
                presupposes that the program plays chess (or performs other chess-related tasks)
                in at least approximately the same way as the human players; put differently, it
                assumes that the theory of chess playing embodied in the program is approxi-
                mately correct.
              14.  Miller (1996).
              15.  Feigenbaum (1989).
              16.  For example, the SQL-Tutor for teaching elementary database skills has more
                than 600 constraints (Mitrovic, Martin & Mayo, 2002).
              17.  See Note 3, this chapter, for references.
              18.  See Martin (1980) and Perry (1984) for blow-by-blow descriptions of the Three
                Mile Island accident, and Vaughan (1997) for a similarly dense description of
                the Challenger explosion. The analyses of the causes of the Hindenburg and the
                Titanic tragedies continue (Bain & van Vorst, 1999; Garzke, Foecke, Matthias
                & Wood, 2000; Matsen, 2008). See Franzén (1960) and Kvarning and Ohrelius
                (1992) regarding the warship Wasa. Petroski (1992) describes both the walkway
                collapse at the Kansas City Hyatt Regency Hotel (Chapter 8) and the collapse of
                the Tacoma Narrows Bridge (pp. 164–171). Smith and Alexander (1999) tell the
                story of how Xerox fumbled the future. The disastrous Operation Market Garden
                is described in Ryan (1974). Practice does not necessarily offer protection against
                errors in handling complex technologies (Youmans & Ohlsson, 2008).
              19.  Leplat and Rasmussen (1984), Norman (1981) and Reason (1990). “No one wants
                to learn by mistakes, but we cannot learn enough from successes to go beyond
                the state of the art [in engineering]” (Petroski, 1992, p. 62, italics in original).
              20.  Carroll and Mui (2008).
              21.  See Petroski (1992, 2006) for analyses of design errors in a variety of engineer-
                ing  systems.  “Past  successes,  no  matter  how  numerous  and  universal,  are  no
                guarantee of future performance in a new context” (Petroski, 2006, p. 3). See