
The first step in developing the ANN is the selection of data for training and testing the network. The numbers of training and testing samples were 18 and 9, respectively, as shown in Table 2. All data were then normalized within the range of ±1 before training and testing the ANN. The ANN model, using the BP learning method, required training in order to build strong links between layers and neurons. The training is initialized by assigning random weights and biases to all interconnected neurons.
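As a minimal sketch of these two steps (not the chapter's code), the normalization to ±1 and the random initialization of weights and biases could look as follows in NumPy; the layer sizes, the ±0.5 initialization range and the array names (W1, b1, W2, b2) are placeholders, not values from the chapter:

import numpy as np

rng = np.random.default_rng(0)

def normalize_pm1(x, lo=None, hi=None):
    # Scale each column linearly into the range [-1, +1].
    lo = x.min(axis=0) if lo is None else lo
    hi = x.max(axis=0) if hi is None else hi
    return 2.0 * (x - lo) / (hi - lo) - 1.0, lo, hi

# Placeholder layer sizes: N_inp inputs (cutting parameters), N_hid hidden
# neurons, one output (a cutting-force component).
N_inp, N_hid, N_out = 3, 6, 1
W1 = rng.uniform(-0.5, 0.5, size=(N_inp, N_hid))  # input -> hidden weights
b1 = rng.uniform(-0.5, 0.5, size=N_hid)           # hidden-layer biases
W2 = rng.uniform(-0.5, 0.5, size=(N_hid, N_out))  # hidden -> output weights
b2 = rng.uniform(-0.5, 0.5, size=N_out)           # output-layer biases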
The output of the k-th neuron in the hidden layer, $O_k^{hid}$, is defined as

$$O_k^{hid} = \frac{1}{1 + \exp\!\left(-\,I_k^{hid}/T^{(1)}\right)} \qquad (1)$$

                          with


$$I_k^{hid} = \sum_{j=1}^{N^{inp}} w_{jk}^{(1)} O_j^{inp} + b_k^{(1)} \qquad (2)$$

where $N^{inp}$ is the number of elements in the input, $w_{jk}^{(1)}$ is the connection weight of the synapse between the j-th neuron in the input layer and the k-th neuron in the hidden layer, $O_j^{inp}$ is the input data, $b_k^{(1)}$ is the bias in the k-th neuron of the hidden layer and $T^{(1)}$ is a scaling parameter.
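A short sketch of Eqs. (1)-(2), reusing the assumed arrays W1 and b1 from above (the scaling parameter T1 is likewise a placeholder):

def hidden_layer(O_inp, W1, b1, T1=1.0):
    # Eq. (2): weighted sum of the inputs plus the hidden-layer bias.
    I_hid = O_inp @ W1 + b1
    # Eq. (1): sigmoid activation with scaling parameter T^(1).
    return 1.0 / (1.0 + np.exp(-I_hid / T1))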
Similarly, the value of the output neuron $O_k^{out}$ is defined as

$$O_k^{out} = \frac{1}{1 + \exp\!\left(-\,I_k^{out}/T^{(2)}\right)} \qquad (3)$$

                          with

$$I_k^{out} = \sum_{i=1}^{N^{hid}} w_{ik}^{(2)} O_i^{hid} + b_k^{(2)} \qquad (4)$$

where $N^{hid}$ is the number of neurons in the hidden layer, $w_{ik}^{(2)}$ is the connection weight of the synapse between the i-th neuron in the hidden layer and the k-th neuron in the output layer, $b_k^{(2)}$ is the bias in the k-th neuron of the output layer and $T^{(2)}$ is a scaling parameter for the output layer.
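The output layer can be sketched the same way from Eqs. (3)-(4); chaining the two functions gives the full forward pass of the assumed network:

def output_layer(O_hid, W2, b2, T2=1.0):
    # Eq. (4): weighted sum of the hidden activations plus the output bias.
    I_out = O_hid @ W2 + b2
    # Eq. (3): sigmoid activation with scaling parameter T^(2).
    return 1.0 / (1.0 + np.exp(-I_out / T2))

# Full forward pass for one normalized input pattern x:
# y_hat = output_layer(hidden_layer(x, W1, b1), W2, b2)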
During training, the output of the ANN is compared with the measured output and the mean relative error is calculated as: