Reinforcement learning holds promise for robotic tasks because it can learn optimal policies through trial and error. However, practical deployments of reinforcement learning usually require human intervention to provide episodic resets when a failure occurs. Since manual resets are generally unavailable to autonomous robots, we propose a reset-free reinforcement learning algorithm based on multi-state recovery and failure prevention to avoid failure-induced resets. Multi-state recovery enables robots to recover from failures by self-correcting their behavior in the problematic state and, more importantly, by deciding which previously visited state is the best to return to for efficient re-learning. Failure prevention reduces potential failures by predicting and excluding possibly unsafe actions in specific states. Both simulations and real-world experiments validate our algorithm, with the results showing a significant reduction in the number of resets and failures during learning.
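To make the two mechanisms concrete, below is a minimal sketch of how such a reset-free learning loop might look in a tabular Q-learning setting. This is an illustration under stated assumptions, not the paper's implementation: the environment interface (`env.reset`, `env.step`, `env.go_to`) and helper names such as `safe_actions`, `recovery_state`, and the `unsafe` set are hypothetical, and the recovery heuristic used here (returning to the visited state with the highest available Q-value) is one plausible choice among many.

```python
import random
from collections import defaultdict

# Sketch of a reset-free Q-learning loop with failure prevention
# (masking actions predicted unsafe) and multi-state recovery
# (returning to a promising previously visited state instead of
# requesting a manual episodic reset).

ACTIONS = [0, 1, 2, 3]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = defaultdict(float)   # Q[(state, action)] -> value estimate
unsafe = set()           # (state, action) pairs excluded after failures
visited = []             # states seen so far, candidates for recovery

def safe_actions(state):
    """Failure prevention: exclude actions already flagged unsafe here."""
    allowed = [a for a in ACTIONS if (state, a) not in unsafe]
    return allowed or ACTIONS  # fall back if every action was flagged

def select_action(state):
    """Epsilon-greedy selection restricted to the safe action subset."""
    actions = safe_actions(state)
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def recovery_state():
    """Multi-state recovery: pick the visited state most promising to
    resume learning from, scored here by its best available Q-value."""
    return max(visited, key=lambda s: max(Q[(s, a)] for a in safe_actions(s)))

def run(env, steps=10_000):
    state = env.reset()  # one initial reset only; none afterwards
    for _ in range(steps):
        visited.append(state)
        action = select_action(state)
        next_state, reward, failed = env.step(action)
        target = reward if failed else reward + GAMMA * max(
            Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        if failed:
            unsafe.add((state, action))          # exclude the failing action
            state = env.go_to(recovery_state())  # recover, no manual reset
        else:
            state = next_state
```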