
Explorations of the ‘Transhuman’ Dimension of Artificial Intelligence   325

For Gelernter, therefore, contemporary AI research, or “computationalism”, disregards the other (inalienable) mental focus-levels to which humans are privy and is preoccupied with rational thought, with “intelligence” precisely, which is why its proponents believe “that minds relate to brains as software relates to computers” (Gelernter 2016: xviii-xix). He compares current research on the mind to dozens of archaeological teams working on the site of a newly discovered ancient temple, describing, measuring and photographing every part of it as part of a process that, they believe, will eventually yield a conclusive report embodying the ‘truth’ about its properties. He disagrees with such an approach, however (Gelernter 2016: 1):

    But this is all wrong. The mind changes constantly on a regular, predictable basis. You can’t even see its developing shape unless you look down from far overhead. You must know, to start, the overall shape of what you deal with in space and time, its architecture and its patterns of change. The important features all change together. The role of emotion in thought, our use of memory, the nature of understanding, the quality of consciousness – all change continuously throughout the day, as we sweep down a spectrum that is crucial to nearly everything about the mind and thought and consciousness.

It is this “spectrum”, in terms of which Gelernter interprets the human mind, that constitutes the unassailable rock against which the reductive efforts of the “computationalists” to map the mind exhaustively at only one of the levels comprising its overall “spectrum” must shatter. Their attempt to grasp the relationship between mind and brain on the model of the relation between software and hardware in computers is, for him, hopelessly inadequate for precisely this reason.
In an essay on the significance of Gelernter’s work, David Von Drehle (2016: 35-39) places it in the context of largely optimistic contemporary AI research, pointing out that Google’s Ray Kurzweil as well as Sam Altman (president of the startup incubator Y Combinator) believe that the future development of AI can only benefit humankind. One should not overlook, however, Von Drehle reminds us, that there are prominent figures at the other end of the spectrum, such as physicist Stephen Hawking and engineer-entrepreneur Elon Musk, who believe that AI poses the “biggest existential threat” to humans. Gelernter, a stubbornly independent thinker in the manner of a true philosopher (he has published on computer science, popular culture, religion, psychology and history, and is a productive artist), fits into neither of these categories. It is not difficult to grasp Hawking and Musk’s techno-pessimism, however, if one keeps in mind Gelernter’s assessment of AI as the development of precisely those aspects of the mind-spectrum that exclude affective states: what reason does one have to believe that coldly ‘rational’, calculative AI would have compassion for human beings? In a manner reminiscent of Merleau-Ponty, the philosopher of embodied perception, Gelernter insists that one cannot (and should not) avoid the problem of accounting for the human body when conceiving of artificial intelligence, as computer scientists have tended to do since 1950, when Alan Turing deliberately “pushed it to one side” (Von Drehle 2016: 36) because it was just too “daunting”. For Gelernter, accounting for the human body means