Particle swarm optimization (PSO) is a swarm intelligence algorithm frequently used to solve global optimization problems because of its rapid convergence and ease of implementation. However, PSO still has certain deficiencies, such as a poor trade-off between exploration and exploitation and premature convergence. Hence, this paper proposes a dual-stage hybrid learning particle swarm optimization (DHLPSO). In the algorithm, the iterative process is partitioned into two stages, and the learning strategy used in each stage emphasizes exploration and exploitation, respectively. In the first stage, a Manhattan-distance-based learning strategy is proposed to increase population diversity: each particle learns from the particle at the greatest Manhattan distance and from a better-performing particle. In the second stage, an excellent-exemplar learning strategy performs local refinement on the population, in which each particle learns from the global best particle and a better-performing particle. A Gaussian mutation strategy further enhances the algorithm's search ability on certain multimodal functions. DHLPSO is evaluated against existing PSO variants on benchmark functions from CEC 2013, and the comparison results demonstrate that, compared with other state-of-the-art PSO variants, DHLPSO achieves highly competitive performance in handling global optimization problems.
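The abstract does not give the update equations, but the first-stage exemplar selection it describes can be sketched as follows. This is a minimal illustration, assuming a minimization problem and a plain NumPy swarm representation; the function name `manhattan_exemplars` and the random choice of the better particle are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def manhattan_exemplars(positions, fitness, i):
    """For particle i, select two exemplars as the abstract describes:
    (a) the particle at the greatest Manhattan (L1) distance from i, and
    (b) a particle with better fitness (here chosen uniformly at random)."""
    dists = np.abs(positions - positions[i]).sum(axis=1)  # L1 distances to all particles
    far = int(np.argmax(dists))                           # most distant particle
    better = np.flatnonzero(fitness < fitness[i])         # indices with better fitness
    good = int(rng.choice(better)) if better.size else i  # fall back to self if i is best
    return far, good

# Toy swarm: 5 particles in 3 dimensions, fitness = sphere function
positions = rng.uniform(-5.0, 5.0, size=(5, 3))
fitness = (positions ** 2).sum(axis=1)
far, good = manhattan_exemplars(positions, fitness, i=2)
```

The distant exemplar pulls particles toward unexplored regions (exploration), while the better exemplar keeps the update biased toward promising solutions, which matches the stated goal of increasing population diversity in the first stage.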
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).