Open Access

Hybrid Navigation Method for Multiple Robots Facing Dynamic Obstacles

Kaidong Zhao, Li Ning
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
University of Chinese Academy of Sciences, Beijing 100049, China

Abstract

With the continuous development of robotics and artificial intelligence, robots are being used in an increasingly wide range of applications. Traditional navigation algorithms, such as Dijkstra and A*, struggle to cope with the many dynamic scenarios encountered in everyday environments. To solve the navigation problem in complex dynamic scenes, we present an improved reinforcement-learning-based algorithm for local path planning that performs well even when many dynamic obstacles are present. The method applies the gmapping algorithm as the upper-layer input and uses a reinforcement learning policy to produce the output. The algorithm enhances the robots' ability to actively avoid obstacles while retaining the adaptability of traditional methods.
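The two-layer architecture the abstract describes (a classical global planner feeding a learned local layer) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the occupancy grid, the Dijkstra routine, and the `local_step` stand-in for the reinforcement-learning policy are all assumptions made for the example.

```python
import heapq

def dijkstra(grid, start, goal):
    """Global layer: shortest path on a 4-connected occupancy grid.
    grid[r][c] == 1 marks a static obstacle from the upper-layer map."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path by walking back from the goal.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

def local_step(pos, waypoint, dynamic_obstacles):
    """Local layer stand-in: follow the next global waypoint unless a
    dynamic obstacle currently occupies it, in which case wait in place.
    A trained RL policy would instead choose an evasive action here."""
    return pos if waypoint in dynamic_obstacles else waypoint
```

The key design point mirrored here is the split of responsibilities: the global layer plans only against the static map, while the local layer alone reacts to dynamic obstacles at each step, so the global plan never needs full replanning when a pedestrian or another robot crosses the route.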

Tsinghua Science and Technology
Pages 894-901
Cite this article:
Zhao K, Ning L. Hybrid Navigation Method for Multiple Robots Facing Dynamic Obstacles. Tsinghua Science and Technology, 2022, 27(6): 894-901. https://doi.org/10.26599/TST.2021.9010073


Received: 11 June 2021
Revised: 22 August 2021
Accepted: 02 September 2021
Published: 21 June 2022
© The author(s) 2022.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
