Page 144 - Data Science Algorithms in a Week
128 Alfonso T. Sarmiento and Edgar Gutierrez
Among the advantages of PSO is that it is conceptually simple and can be
implemented in a few lines of code. Compared with other stochastic optimization
techniques such as GA or simulated annealing, PSO has fewer complicated
operations and fewer defining parameters (Cui and Weile, 2005). PSO has been shown to
be effective in optimizing difficult multidimensional discontinuous problems in a variety
of fields (Eberhart and Shi, 1998), and it is also very effective in solving minimax
problems (Laskari et al., 2002). According to Schutte and Groenwold (2005), a drawback
of the original PSO algorithm proposed by Kennedy and Eberhart is that, although it
is known to converge quickly to the approximate region of the global minimum, it
loses this efficiency in the stage where a refined local search is required to
locate the minimum exactly. To overcome this shortcoming, variations of the
original PSO algorithm that employ adaptive parameters have been proposed (Shi and
Eberhart, 1998, 2001; Clerc, 1999).
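To make the "few lines of code" claim concrete, the following is a minimal sketch of the basic (Kennedy and Eberhart style) PSO velocity-and-position update, not the authors' implementation; the inertia weight `w`, acceleration coefficients `c1` and `c2`, swarm size, and iteration count are commonly used illustrative defaults, not values taken from the text.

```python
# Minimal particle swarm optimization sketch (illustrative only).
# w, c1, c2, n_particles, and iters are assumed typical defaults.
import random

def pso(f, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    # Initialize positions uniformly at random; velocities start at zero.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best-known position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best-known position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Move the particle and clamp it to the search bounds.
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Usage: minimize the sphere function f(x) = sum of x_d squared.
best, val = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
```

The entire method is the single velocity update rule plus a bookkeeping step for personal and global bests, which is what gives PSO its small parameter count relative to GA's selection, crossover, and mutation operators.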
Comparisons of the performance of GA and PSO on different optimization
problems appear in the literature. Hassan et al. (2005) compared the two
algorithms on a benchmark set of problems. Their analysis shows that PSO is
more efficient than GA in terms of computational effort when applied to unconstrained
nonlinear problems with continuous variables. The computational savings offered by
PSO over GA are not very significant when used to solve constrained nonlinear problems
with discrete or continuous variables. Jones (2005) chose the identification of model
parameters for control systems as the problem area for the comparison. He indicates that
in terms of computational effort, the GA approach is faster, although it should be noted
that neither algorithm takes an unacceptably long time to determine their results.
With respect to the accuracy of model parameters, GA determines values closer to the
known ones than PSO does. Moreover, GA appears to arrive at its final parameter
values in fewer generations than PSO. Lee et al. (2005) selected return evaluation
in the stock market as the scenario for comparing GA and PSO. They show that PSO
shares GA's ability to handle arbitrary nonlinear functions but can reach the
global optimum in fewer iterations than GA. When finding technical trading rules,
PSO is also more efficient than GA. Clow and White (2004) compared the performance of
GA and PSO when used to train artificial neural networks (weight optimization problem).
They show that PSO is superior for this application, training networks faster and more
accurately than GA does, once properly optimized.
The literature presented above shows that PSO combined with simulation
optimization is an efficient technique that can be implemented and applied easily to
solve various function optimization problems. This approach can therefore be extended
to the SCM area to search for policies using an objective function defined on a
general stabilization concept such as the one presented in this work.