Open Access Issue
Evolutionary Experience-Driven Particle Swarm Optimization with Dynamic Searching
Complex System Modeling and Simulation 2023, 3 (4): 307-326
Published: 07 December 2023
Downloads: 49

Particle swarm optimization (PSO) algorithms have been successfully applied to various complex optimization problems. However, balancing diversity and convergence remains a problem that requires continued research. Therefore, an evolutionary experience-driven particle swarm optimization with dynamic searching (EEDSPSO) is proposed in this paper. To extract effective information during population evolution, an adaptive framework of evolutionary experience is presented. Based on this framework, an experience-based neighborhood topology adjustment (ENT) controls the size of the neighborhood range, thereby effectively maintaining population diversity. Meanwhile, an experience-based elite archive mechanism (EEA) adjusts the weights of elite particles in the late evolutionary stage, thus enhancing the convergence of the algorithm. In addition, a Gaussian crisscross learning strategy (GCL) adopts a cross-learning method to further balance diversity and convergence. Finally, extensive experiments are conducted on the CEC2013 and CEC2017 benchmark suites. The experimental results show that EEDSPSO outperforms state-of-the-art PSO variants.
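The abstract does not give ENT's exact rules, but the general idea of a neighborhood topology whose size changes over the run can be sketched as follows. This is a minimal illustration, not the paper's method: the ring topology, the linear radius schedule, and both function names are my own assumptions.

```python
def ring_neighborhood_best(fitness, radius):
    """For each particle on a ring topology, return the index of the
    best (lowest-fitness) particle within `radius` positions of it."""
    n = len(fitness)
    best = []
    for i in range(n):
        neighbors = [(i + k) % n for k in range(-radius, radius + 1)]
        best.append(min(neighbors, key=lambda j: fitness[j]))
    return best

def neighborhood_radius(t, t_max, n, r_min=1):
    """Grow the ring radius linearly from r_min to n // 2 over the run:
    small neighborhoods early favor diversity, large ones late favor
    convergence (illustrative schedule only)."""
    return r_min + round((n // 2 - r_min) * t / t_max)
```

With a radius of 1 each particle only sees its two ring neighbors, so different regions of the swarm can follow different local leaders; as the radius approaches `n // 2` the topology effectively becomes global-best PSO.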

Open Access Issue
Dual-Stage Hybrid Learning Particle Swarm Optimization Algorithm for Global Optimization Problems
Complex System Modeling and Simulation 2022, 2 (4): 288-306
Published: 30 December 2022
Downloads: 42

Particle swarm optimization (PSO) is a swarm intelligence algorithm frequently used to solve global optimization problems due to its rapid convergence and ease of implementation. However, PSO still has certain deficiencies, such as a poor trade-off between exploration and exploitation and premature convergence. Hence, this paper proposes a dual-stage hybrid learning particle swarm optimization (DHLPSO). In this algorithm, the iterative process is partitioned into two stages, whose learning strategies emphasize exploration and exploitation, respectively. In the first stage, to increase population diversity, a Manhattan-distance-based learning strategy is proposed, in which each particle learns from the particle farthest from it in Manhattan distance and from a better particle. In the second stage, an excellent-example learning strategy performs local optimization on the population, in which each particle learns from the global best particle and a better particle. A Gaussian mutation strategy further enhances the algorithm's search ability on certain multimodal functions. DHLPSO is evaluated against existing PSO variants on benchmark functions from CEC 2013. The comparison results clearly demonstrate that DHLPSO achieves highly competitive performance in handling global optimization problems compared with other state-of-the-art PSO variants.
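Two building blocks named in the abstract, farthest-Manhattan exemplar selection and Gaussian mutation, can be sketched roughly as below. The function names, the pairwise-distance implementation, and the mutation parameters are assumptions for illustration; the paper's actual update equations are not given in the abstract.

```python
import numpy as np

def farthest_manhattan(positions):
    """For each particle, the index of the particle at the greatest
    Manhattan (L1) distance from it, used as an exploratory exemplar."""
    positions = np.asarray(positions, dtype=float)
    # Pairwise L1 distance matrix, shape (n, n), via broadcasting.
    dists = np.abs(positions[:, None, :] - positions[None, :, :]).sum(axis=2)
    return dists.argmax(axis=1)

def gaussian_mutate(x, sigma=0.1, rng=None):
    """Perturb a position vector with zero-mean Gaussian noise to help
    escape local optima on multimodal landscapes."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    return x + rng.normal(0.0, sigma, size=x.shape)
```

Pulling each particle toward its most distant peer spreads the swarm across the search space, which is why this kind of exemplar suits the exploration-oriented first stage.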

Open Access Issue
Adaptive Dimensional Learning with a Tolerance Framework for the Differential Evolution Algorithm
Complex System Modeling and Simulation 2022, 2 (1): 59-77
Published: 30 March 2022
Downloads: 774

The Differential Evolution (DE) algorithm is an efficient optimization algorithm that has been used to solve various optimization problems. In this paper, adaptive dimensional learning with a tolerance framework for DE is proposed. The population is divided into an elite subpopulation, an ordinary subpopulation, and an inferior subpopulation according to fitness values. The ordinary and elite subpopulations are used to maintain the current evolutionary state and to guide the evolution direction of the population, respectively. The inferior subpopulation learns from the elite subpopulation through a dimensional learning strategy. If the global optimum does not improve within a specified number of iterations, a tolerance mechanism is applied, under which the inferior and elite subpopulations implement a restart strategy and a reverse dimensional learning strategy, respectively. In addition, the individual status and algorithm status are used to adaptively adjust the control parameters. To evaluate the proposed algorithm, it is compared against six state-of-the-art DE variants on benchmark functions. The simulation results show that the proposed algorithm outperforms the other variants in both convergence rate and solution accuracy.
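The fitness-based three-way split and one plausible reading of the dimensional learning step can be sketched as follows. The subpopulation fractions, the 50% dimension-copy rate, and the function names are illustrative assumptions; the paper defines the actual rules.

```python
import numpy as np

def split_subpopulations(fitness, elite_frac=0.2, inferior_frac=0.2):
    """Partition individual indices by fitness (minimization): the best
    fraction forms the elite subpopulation, the worst the inferior one,
    and the remainder the ordinary one. Fractions are illustrative."""
    order = np.argsort(fitness)
    n = len(order)
    n_elite = max(1, int(n * elite_frac))
    n_inferior = max(1, int(n * inferior_frac))
    elite = order[:n_elite]
    inferior = order[n - n_inferior:]
    ordinary = order[n_elite:n - n_inferior]
    return elite, ordinary, inferior

def dimensional_learning(inferior_pos, elite_pos, rng=None):
    """Each inferior individual copies a random subset of dimensions
    from a randomly chosen elite individual (one possible reading of
    dimension-wise learning; not the paper's exact operator)."""
    rng = np.random.default_rng(rng)
    out = np.asarray(inferior_pos, dtype=float).copy()
    elite_pos = np.asarray(elite_pos, dtype=float)
    for i in range(out.shape[0]):
        donor = elite_pos[rng.integers(elite_pos.shape[0])]
        mask = rng.random(out.shape[1]) < 0.5  # copy ~half the dimensions
        out[i, mask] = donor[mask]
    return out
```

Copying only some dimensions, rather than whole elite vectors, lets inferior individuals inherit good coordinates while keeping part of their own position, which preserves diversity during the learning step.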
