
12    The Fourth Chinese Operations Research Youth Forum  |  Youth Talks






Fractional-order Global Optimal BackPropagation Machine Trained by Improved Fractional-order Steepest Descent Method




Wang Jian     China University of Petroleum (East China)


This paper introduces a novel fractional-order branch of the family of BackPropagation Neural Networks (BPNNs), trained by an improved Fractional-order Steepest Descent Method (FSDM); this differs from the majority of previous classic first-order BPNNs, which are trained by the traditional first-order steepest descent method. To improve the optimization performance of classic first-order BPNNs, we study whether the improved FSDM, based on fractional calculus, can be applied to generalize classic first-order BPNNs to Fractional-order Backpropagation Neural Networks (FBPNNs). Motivated by this, the paper proposes a state-of-the-art application of fractional calculus to implement an FBPNN trained by an improved FSDM, whose reverse incremental search proceeds in the negative directions of the approximate fractional-order partial derivatives of the square error. First, the theoretical concept of an FBPNN trained by an improved FSDM is described mathematically. Then, the mathematical proof of the fractional-order global optimal convergence, an assumption on the structure, and the fractional-order multi-scale global optimization of an FBPNN trained by an improved FSDM are analysed in detail. Finally, comparative experiments are performed against a classic first-order BPNN: an example function approximation, fractional-order multi-scale global optimization, and two comparative performance studies on real data. The major advantage of an FBPNN trained by an improved FSDM over a classic first-order BPNN is the more efficient searching capability of its fractional-order multi-scale global optimization for determining the global optimal solution.
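The flavour of a fractional-order steepest descent step can be illustrated with a minimal sketch. The one-term Caputo-type approximation of the fractional gradient used below is a common choice from the fractional gradient descent literature and an assumption for illustration, not necessarily the paper's exact scheme; the function name `frac_grad_step` and the toy quadratic objective are hypothetical.

```python
import math

def frac_grad_step(theta, grad, base, alpha, lr):
    """One fractional steepest-descent update on a scalar parameter.

    Assumed one-term Caputo-type approximation of the fractional gradient:
        D^alpha f(theta) ~ f'(theta) * |theta - base|^(1 - alpha) / Gamma(2 - alpha)
    For alpha = 1 this reduces to the classic first-order step
    theta - lr * grad.
    """
    # Small epsilon keeps the first step nonzero when theta == base.
    scale = (abs(theta - base) + 1e-12) ** (1.0 - alpha) / math.gamma(2.0 - alpha)
    return theta - lr * grad * scale

# Toy example: minimise f(x) = (x - 3)^2 with fractional order alpha = 0.9.
theta, base = 0.0, 0.0          # base point of the fractional derivative
for _ in range(200):
    grad = 2.0 * (theta - 3.0)  # f'(theta)
    base, theta = theta, frac_grad_step(theta, grad, base, alpha=0.9, lr=0.1)
# theta should now be close to the minimiser 3.0
```

Note the exponent `1 - alpha` on the distance to the base point: it rescales the classic gradient direction, which is one intuition for the different multi-scale search behaviour the abstract attributes to the FSDM.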