hidden and output layers, $b_k^{(1)}$ and $b_k^{(2)}$, respectively, are controlled during data processing.
Before practical application, an ANN needs to be trained. Training, or learning as it is often called, is achieved by minimizing the sum of squared errors between the predicted and actual outputs of the ANN, by continuously adjusting and finally determining the weights connecting neurons in adjacent layers. Several learning algorithms exist for ANNs, and back-propagation (BP) is currently the most popular training method, in which the weights of the network are adjusted according to the error-correction learning rule. Basically, the BP algorithm consists of two phases of data flow through the layers of the network: forward and backward. First, the input pattern is propagated from the input layer to the output layer and, as a result of this forward flow of data, produces an actual output. Then, in the backward flow of data, the error signals resulting from any difference between the desired outputs and those obtained in the forward phase are back-propagated from the output layer to the previous layers, updating the weights and biases of each node until the input layer is reached. This process is repeated until the error falls within a prescribed value.
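As a minimal sketch of this two-phase cycle, consider a single-hidden-layer network trained by gradient descent on the sum of squared errors. The layer sizes, sigmoid activation, learning rate, and placeholder data below are illustrative assumptions, not the setup used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed dimensions: 5 inputs, 8 hidden neurons, 1 output.
n_in, n_hid, n_out = 5, 8, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_hid));  b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.5, size=(n_hid, n_out)); b2 = np.zeros(n_out)

X = rng.random((40, n_in))      # placeholder training inputs
y = rng.random((40, n_out))     # placeholder target outputs
lr, tol = 0.01, 1e-3            # learning rate and error tolerance

for epoch in range(10000):
    # Forward phase: propagate the input pattern to the output layer.
    h = sigmoid(X @ W1 + b1)
    y_hat = h @ W2 + b2         # linear output neuron

    # Sum of squared errors between predicted and actual outputs.
    err = y_hat - y
    sse = 0.5 * np.sum(err ** 2)
    if sse < tol:               # stop once the error is within tolerance
        break

    # Backward phase: back-propagate error signals layer by layer and
    # update weights and biases (error-correction learning rule).
    dW2 = h.T @ err
    db2 = err.sum(axis=0)
    dh = (err @ W2.T) * h * (1.0 - h)   # sigmoid derivative
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```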
In this paper, a multilayer feed-forward ANN architecture, trained using the BP algorithm, was employed to develop a predictive model of cutting forces in machining Inconel 718 under HPC conditions. The ANN is made up of three types of layers: input, hidden, and output. The network structure consists of five neurons in the input layer (corresponding to the five inputs: diameter of the nozzle, distance between the impact point of the jet and the cutting edge, pressure of the jet, cutting speed, and feed) and one neuron in the output layer (corresponding to a cutting force component). Predictions of the cutting force Fc, feed force Ff, and passive force Fp were performed separately by designing single-output neural networks, because this approach decreases the size of the ANN and enables faster convergence and better prediction capability. Figure 2 shows the architecture of the ANN together with the input and output parameters.
Figure 2. Artificial neural network architecture.
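The single-output design described above might be set up as follows, here using scikit-learn's MLPRegressor as one possible implementation: one network per force component, each with the same five inputs. The hidden-layer size, solver choice, and the placeholder data are assumptions for illustration, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Five inputs: nozzle diameter, jet impact distance, jet pressure,
# cutting speed, feed (placeholder values here).
X = np.random.rand(60, 5)
forces = {                       # placeholder measured force components [N]
    "Fc": np.random.rand(60) * 500,
    "Ff": np.random.rand(60) * 300,
    "Fp": np.random.rand(60) * 200,
}

models = {}
for name, y in forces.items():
    # A separate 5-input, single-output feed-forward network per force
    # component; gradients are computed by back-propagation internally.
    net = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                     solver="lbfgs", max_iter=5000, random_state=0),
    )
    models[name] = net.fit(X, y)

# Predict Fc for a new cutting condition (illustrative input vector).
print(models["Fc"].predict([[1.5, 2.0, 50.0, 60.0, 0.1]]))
```

Training each component with its own small network, rather than one three-output network, keeps each model compact, which is the convergence and accuracy rationale given above.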