Open Access

Differential Evolution with Joint Adaptation of Mutation Strategies and Control Parameters via Distributed Proximal Policy Optimization

State Key Laboratory of Mechanical Transmission for Advanced Equipment and School of Mechanical and Vehicle Engineering, Chongqing University, Chongqing 400044, China
School of Computer Science, China University of Geosciences, Wuhan 430078, China
State Key Laboratory of Mechanical Transmission for Advanced Equipment and School of Mechanical and Vehicle Engineering, Chongqing University, Chongqing 400044, China, and also with State Key Laboratory of Fluid Power and Mechatronic Systems, Zhejiang University, Hangzhou 310027, China
Show Author Information

Abstract

Mutation operations and their associated control parameters play important roles in the performance of the differential evolution (DE) algorithm. Learning optimal policies for these strategies and parameters through reinforcement learning has become an active research topic. However, most existing studies adapt either the mutation strategy or the control parameters alone, keeping the others fixed or self-adaptive, which degrades performance. To address this gap, this paper proposes a framework for the joint adaptation of mutation strategies and their control parameters based on deep reinforcement learning. In this method, the distributed proximal policy optimization (DPPO) algorithm is employed to train agents to dynamically select the optimal combination of mutation strategy and control parameters. To enhance the agent's learning of the optimal policy, information derived from fitness landscape analysis is incorporated into the state representation. Training is conducted on the black-box optimization benchmark test problems, which can generate large-scale sets of test instances. Numerical results on unseen problems from the CEC2013 and CEC2017 test suites, and on the real-world application of rover trajectory planning, demonstrate that the proposed approach achieves competitive performance compared with state-of-the-art methods. The adaptation behavior and the contribution of learning are also thoroughly analyzed.
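The joint-adaptation loop the abstract describes can be sketched minimally: at each generation, a single action selects both a mutation strategy and an (F, CR) pair, and the DE population evolves under that choice. In the sketch below, a random placeholder policy stands in for the trained DPPO agent, the sphere function stands in for the benchmark problems, and the strategy names and parameter grid are illustrative assumptions, not the paper's actual action space or state features.

```python
import random

def sphere(x):
    """Toy objective standing in for the benchmark problems."""
    return sum(v * v for v in x)

# Illustrative joint action space: strategy index x (F, CR) index.
STRATEGIES = ["rand/1", "best/1", "current-to-best/1"]
PARAM_GRID = [(0.5, 0.9), (0.8, 0.5), (1.0, 0.1)]  # (F, CR) pairs

def select_action(state):
    # Placeholder policy: in the paper, a DPPO-trained agent maps the
    # state (e.g., fitness-landscape features) to this joint action.
    return random.randrange(len(STRATEGIES)), random.randrange(len(PARAM_GRID))

def mutate(pop, fit, i, strategy, F):
    """Produce a mutant vector for individual i under the chosen strategy."""
    dim = len(pop[i])
    best = min(range(len(pop)), key=lambda k: fit[k])
    a, b, c = random.sample([k for k in range(len(pop)) if k != i], 3)
    if strategy == "rand/1":
        return [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
    if strategy == "best/1":
        return [pop[best][d] + F * (pop[a][d] - pop[b][d]) for d in range(dim)]
    # current-to-best/1
    return [pop[i][d] + F * (pop[best][d] - pop[i][d])
            + F * (pop[a][d] - pop[b][d]) for d in range(dim)]

def de(dim=5, pop_size=20, gens=100, seed=0):
    random.seed(seed)
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [sphere(x) for x in pop]
    for _ in range(gens):
        # One joint action per generation: strategy AND control parameters.
        s_idx, p_idx = select_action(state=None)
        strategy, (F, CR) = STRATEGIES[s_idx], PARAM_GRID[p_idx]
        for i in range(pop_size):
            v = mutate(pop, fit, i, strategy, F)
            jrand = random.randrange(dim)  # binomial crossover
            u = [v[d] if (random.random() < CR or d == jrand) else pop[i][d]
                 for d in range(dim)]
            fu = sphere(u)
            if fu <= fit[i]:  # greedy selection
                pop[i], fit[i] = u, fu
    return min(fit)

print(de())
```

In the paper, the per-generation reward signal (e.g., relative fitness improvement) would feed back into DPPO training; here the policy is frozen, so the sketch only shows how one joint action drives both mutation and crossover in a generation.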

Electronic Supplementary Material

Download File(s)
TST-2024-9010185_ESM.pdf (453.3 KB)

Tsinghua Science and Technology, Pages 101-124
Cite this article:
Ding W, Qian M, Lu C, et al. Differential Evolution with Joint Adaptation of Mutation Strategies and Control Parameters via Distributed Proximal Policy Optimization. Tsinghua Science and Technology, 2026, 31(1): 101-124. https://doi.org/10.26599/TST.2024.9010185
2470 Views · 215 Downloads
Citations: Crossref 1 · Web of Science 2 · Scopus 0 · CSCD 0

Received: 19 April 2024
Revised: 26 September 2024
Accepted: 29 September 2024
Published: 25 August 2025
© The author(s) 2026.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).