
Dual-Stage Hybrid Learning Particle Swarm Optimization Algorithm for Global Optimization Problems

Wei Li¹, Yangtao Chen¹, Qian Cai¹ (corresponding author), Cancan Wang¹, Ying Huang², Soroosh Mahmoodi³

¹ School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou 341000, China
² School of Mathematical and Computer Science, Gannan Normal University, Ganzhou 341000, China
³ Soroosh Khorshid Iranian Co., Abyek Industrial Zone, Qazvin 999067, Iran

Abstract

Particle swarm optimization (PSO) is a swarm intelligence algorithm that is frequently used to solve global optimization problems because of its rapid convergence and ease of implementation. However, PSO still has certain deficiencies, such as a poor trade-off between exploration and exploitation and premature convergence. Hence, this paper proposes a dual-stage hybrid learning particle swarm optimization (DHLPSO) algorithm. The iterative process is partitioned into two stages, and the learning strategy used at each stage emphasizes exploration and exploitation, respectively. In the first stage, to increase population diversity, a Manhattan-distance-based learning strategy is proposed, in which each particle learns from the particle farthest away in Manhattan distance and from a better-performing particle. In the second stage, an excellent-example learning strategy is adopted to perform local refinement of the population, in which each particle learns from the global best particle and a better-performing particle. A Gaussian mutation strategy further enhances the algorithm's search ability on certain multimodal functions. DHLPSO is evaluated against existing PSO variants on the CEC 2013 benchmark functions. The comparison results clearly demonstrate that DHLPSO delivers highly competitive performance on global optimization problems compared with other state-of-the-art PSO variants.

Keywords: particle swarm optimization, Manhattan distance, example learning, Gaussian mutation, dual-stage, global optimization problem
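For readers who want a concrete picture of the dual-stage scheme, the following minimal Python sketch illustrates the ideas described in the abstract. It is not the authors' implementation: the velocity-update form, the coefficient values, the stage-switching point, and the helper names (manhattan_farthest, dhlpso_sketch) are all illustrative assumptions.

import numpy as np

def manhattan_farthest(swarm, i):
    """Index of the particle farthest from particle i in Manhattan (L1) distance."""
    d = np.abs(swarm - swarm[i]).sum(axis=1)   # L1 distance to every particle
    return int(np.argmax(d))

def dhlpso_sketch(f, dim=30, n=40, iters=1000, bounds=(-100.0, 100.0), split=0.5, seed=0):
    """Hedged sketch of a dual-stage hybrid learning PSO (assumed update rule)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))          # positions
    v = np.zeros((n, dim))                     # velocities
    fit = np.apply_along_axis(f, 1, x)
    for t in range(iters):
        order = np.argsort(fit)                # order[0] is the current best particle
        gbest = x[order[0]].copy()
        for i in range(n):
            rank = int(np.argwhere(order == i)[0, 0])
            # a randomly chosen better-performing particle (the best learns from itself)
            better = x[order[rng.integers(rank)]] if rank > 0 else x[i]
            if t < split * iters:
                # stage 1 (exploration): exemplar is the farthest particle in L1 distance
                exemplar = x[manhattan_farthest(x, i)]
            else:
                # stage 2 (exploitation): exemplar is the global best particle
                exemplar = gbest
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = 0.7 * v[i] + 1.5 * r1 * (exemplar - x[i]) + 1.5 * r2 * (better - x[i])
            x[i] = np.clip(x[i] + v[i], lo, hi)
        # assumed Gaussian mutation around the best position, to escape local optima
        trial = np.clip(gbest + rng.normal(0.0, 0.1 * (hi - lo), dim), lo, hi)
        fit = np.apply_along_axis(f, 1, x)
        worst = int(np.argmax(fit))
        if f(trial) < fit[worst]:
            x[worst], fit[worst] = trial, f(trial)
    best = int(np.argmin(fit))
    return x[best], fit[best]

On the sphere function, for instance: best, val = dhlpso_sketch(lambda z: float(np.sum(z * z)), dim=10, iters=200). The inertia weight 0.7, the acceleration coefficients 1.5, and the half-way stage switch are placeholders; the paper's actual parameter settings are not stated in the abstract.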



Publication history

Received: 22 July 2022
Revised: 17 August 2022
Accepted: 07 September 2022
Published: 30 December 2022
Issue date: December 2022

Copyright

© The author(s) 2022

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
