
The Estimation of Cutting Forces in the Turning of Inconel 718 Assisted …   163

                           v_i(k+1) = ( v_i,1(k+1), v_i,2(k+1), ..., v_i,d(k+1) )          (10)

                       to the current position xi(k):

                           x_i(k+1) = x_i(k) + v_i(k+1)                                    (11)

                           The components of v_i(k+1) are computed as follows:

                           v_i,j(k+1) = ω v_i,j(k) + c1 r_1,j ( ŷ_i,j(k) − x_i,j(k) ) + c2 r_2,j ( y_j(k) − x_i,j(k) )          (12)

                        where j designates the component in the search space; ω is the inertia weight, which
                        decreases linearly from 1 to near 0; c1 and c2 are the cognitive and social parameters,
                        respectively, known as learning factors; and r_1,j and r_2,j are random numbers uniformly
                        distributed in the range [0, 1]. The inertia term causes the particle to continue in the
                        direction in which it was moving at iteration k: a large weight facilitates global search,
                        while a small one tends to facilitate fine-tuning of the current search area. The cognitive
                        term, associated with the experience of the particle, represents its previous best position
                        and provides a velocity component in this direction, whereas the social term represents
                        information about the best position of any particle in the neighborhood and causes
                        movement towards that particle. These two parameters are not critical for the convergence
                        of PSO, but fine-tuning them may result in faster convergence of the algorithm and
                        alleviation of local minima. The r_1,j and r_2,j parameters are employed to maintain the
                        diversity of the population.
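                           The update rules of Eqs. (10)-(12) can be sketched as follows. This is a minimal illustration, not the chapter's implementation: the learning factors c1 = c2 ≈ 1.49 are a common choice from the PSO literature (the chapter does not fix their values), the inertia schedule is an assumed linear decrease, and the sphere function stands in as a toy objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, p_best, g_best, omega, c1=1.49445, c2=1.49445):
    """One velocity/position update following Eqs. (11)-(12).

    x, v    : (n, d) current positions and velocities
    p_best  : (n, d) each particle's own best position (y-hat_i in Eq. (12))
    g_best  : (d,)  best position found in the neighborhood (y in Eq. (12))
    """
    n, d = x.shape
    r1 = rng.random((n, d))            # r_{1,j} ~ U[0, 1], fresh per component
    r2 = rng.random((n, d))            # r_{2,j} ~ U[0, 1]
    v_new = (omega * v                 # inertia term
             + c1 * r1 * (p_best - x)  # cognitive term
             + c2 * r2 * (g_best - x)) # social term
    return x + v_new, v_new            # position update, Eq. (11)

# Minimal demo on the sphere function f(x) = sum_j x_j^2 (a toy objective).
def sphere(x):
    return np.sum(x ** 2, axis=1)

n, d, iters = 20, 5, 200
x = rng.uniform(-5.0, 5.0, (n, d))
v = np.zeros((n, d))
p_best, p_val = x.copy(), sphere(x)

for k in range(iters):
    omega = 0.9 - 0.8 * k / iters              # inertia decreasing linearly
    g_best = p_best[np.argmin(p_val)]          # best particle of the swarm
    x, v = pso_step(x, v, p_best, g_best, omega)
    val = sphere(x)
    improved = val < p_val                     # personal bests only improve
    p_best[improved], p_val[improved] = x[improved], val[improved]
```

After the loop, `p_val.min()` holds the best objective value found by the swarm; the monotone personal-best bookkeeping is what gives PSO the "memory" discussed below.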
                           The PSO algorithm shares many similarities with evolutionary computation
                        techniques such as the GA. PSO is also initialized with a randomly created population
                        of potential solutions and uses fitness values to evaluate the population. Furthermore,
                        both algorithms update the population and search for the optimum with random
                        techniques. However, unlike the GA, PSO has no operators such as the mutation and
                        crossover that exist in evolutionary algorithms. In PSO, potential solutions (particles)
                        move towards the actual optimum in the solution space by following their own
                        experience and the current best particles. Compared with the GA, PSO has some
                        attractive characteristics: its memory, which enables it to retain knowledge of good
                        solutions found by particles of the whole swarm; its simultaneous search for an optimum
                        in multiple dimensions; and its mechanism of constructive cooperation and information
                        sharing between particles. Due to its simplicity, robustness, easy implementation, and
                        quick convergence, the PSO optimization method has been successfully applied to a wide
                        range of applications. The focus of this study is to employ PSO for the optimization of the
                        weights and biases of the ANN model. The steps involved in the process of ANN training
                        using PSO are shown in Table 4.
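                           The general idea of PSO-based ANN training can be sketched as below. Everything here is an illustrative assumption, not the study's setup: a small 1-5-1 network fitting a sine curve stands in for the cutting-force model, each particle encodes the flattened weights and biases, and the fitness is the network's mean squared error. Velocity clamping is added as a common safeguard, not something Table 4 prescribes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task standing in for the cutting-force data (an assumption).
X = np.linspace(-np.pi, np.pi, 40).reshape(-1, 1)
y = np.sin(X)

H = 5                           # hidden neurons (illustrative choice)
D = 1 * H + H + H * 1 + 1       # weights + biases = particle dimension

def forward(theta, X):
    """Decode a flat particle vector into the 1-H-1 network and evaluate it."""
    W1 = theta[:H].reshape(1, H)
    b1 = theta[H:2 * H]
    W2 = theta[2 * H:3 * H].reshape(H, 1)
    b2 = theta[3 * H]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(theta):
    """MSE of the decoded network: the quantity PSO minimizes."""
    return float(np.mean((forward(theta, X) - y) ** 2))

# Standard PSO loop (Eqs. (11)-(12)) over the weight space.
n, iters = 30, 300
pos = rng.uniform(-1.0, 1.0, (n, D))
vel = np.zeros((n, D))
p_best = pos.copy()
p_val = np.array([fitness(p) for p in pos])

for k in range(iters):
    omega = 0.9 - 0.5 * k / iters               # assumed linear inertia decay
    g = p_best[np.argmin(p_val)]                # swarm-best weight vector
    r1, r2 = rng.random((n, D)), rng.random((n, D))
    vel = omega * vel + 1.49445 * r1 * (p_best - pos) + 1.49445 * r2 * (g - pos)
    vel = np.clip(vel, -1.0, 1.0)               # velocity clamping (safeguard)
    pos = pos + vel
    val = np.array([fitness(p) for p in pos])
    m = val < p_val                             # keep personal bests monotone
    p_best[m], p_val[m] = pos[m], val[m]

best_mse = p_val.min()                          # error of the best network found
```

Because fitness only requires a forward pass, no gradients are needed, which is what makes PSO a drop-in trainer for the ANN weights and biases.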