References
[1] B. Wu, Q. Fu, J. Liang, P. Qu, X. Q. Li, L. Wang, W. Liu, W. Yang, and Y. S. Liu, Hierarchical macro strategy model for MOBA game AI, in Proc. the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 2019, pp. 1206–1213.
[2] P. Gil and L. Nunes, Hierarchical reinforcement learning using path clustering, presented at the 8th Iberian Conference on Information Systems and Technologies (CISTI), Lisbon, Portugal, 2013.
[3] K. Zhu and T. Zhang, Deep reinforcement learning based mobile robot navigation: A review, Tsinghua Science and Technology, vol. 26, no. 5, pp. 674–691, 2021.
[4] G. Sharon, R. Stern, A. Felner, and N. R. Sturtevant, Conflict-based search for optimal multi-agent pathfinding, Artificial Intelligence, vol. 219, pp. 40–66, 2015.
[5] S. Koenig and M. Likhachev, Fast replanning for navigation in unknown terrain, IEEE Transactions on Robotics, vol. 21, no. 3, pp. 354–363, 2005.
[6] F. Y. L. Chin, H. F. Ting, and Y. Zhang, Variable-size rectangle covering, in Proc. International Conference on Combinatorial Optimization and Applications, Huangshan, China, 2009, pp. 145–154.
[7] Y. Zhang, Q. Ge, R. Fleischer, T. Jiang, and H. Zhu, Approximating the minimum weight weak vertex cover, Theoretical Computer Science, vol. 363, no. 1, pp. 99–105, 2006.
[8] J. Snape, J. V. D. Berg, S. J. Guy, and D. Manocha, The hybrid reciprocal velocity obstacle, IEEE Transactions on Robotics, vol. 27, no. 4, pp. 696–706, 2011.
[9] J. V. D. Berg, S. J. Guy, M. Lin, and D. Manocha, Reciprocal n-body collision avoidance, in Robotics Research, Springer Tracts in Advanced Robotics, C. Pradalier, R. Siegwart, and G. Hirzinger, eds. Berlin, Germany: Springer, 2011, pp. 3–19.
[10] W. Hess, D. Kohler, H. Rapp, and D. Andor, Real-time loop closure in 2D LIDAR SLAM, in Proc. IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 2016, pp. 1271–1278.
[11] D. Hennes, D. Claes, W. Meeussen, and K. Tuyls, Multi-robot collision avoidance with localization uncertainty, in Proc. the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Valencia, Spain, 2012, pp. 147–154.
[12] F. Duchoň, A. Babinec, M. Kajan, P. Beňo, M. Florek, T. Fico, and L. Jurišica, Path planning with modified A star algorithm for a mobile robot, Procedia Engineering, vol. 96, pp. 59–69, 2014.
[13] L. Chen, N. Ma, P. Wang, J. Li, P. Wang, G. Pang, and X. Shi, Survey of pedestrian action recognition techniques for autonomous driving, Tsinghua Science and Technology, vol. 25, no. 4, pp. 458–470, 2020.
[14] K. Konolige, E. Marder-Eppstein, and B. Marthi, Navigation in hybrid metric-topological maps, in Proc. IEEE International Conference on Robotics and Automation, Shanghai, China, 2011, pp. 3041–3047.
[15] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, ROS: An open-source robot operating system, presented at the ICRA Workshop on Open Source Software, Kobe, Japan, 2009.
[16] A. Lopes, J. Rodrigues, J. Perdigao, G. Pires, and U. Nunes, A new hybrid motion planner: Applied in a brain-actuated robotic wheelchair, IEEE Robotics & Automation Magazine, vol. 23, no. 4, pp. 82–93, 2016.
[17] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. Cambridge, MA, USA: MIT Press, 1998.
[18] F. Tao, J. Cheng, Q. Qi, M. Zhang, H. Zhang, and F. Sui, Digital twin-driven product design, manufacturing and service with big data, The International Journal of Advanced Manufacturing Technology, vol. 94, nos. 9–12, pp. 3563–3576, 2018.
[19] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, Continuous control with deep reinforcement learning, presented at the International Conference on Learning Representations (ICLR), San Juan, PR, USA, 2016.
[20] J. Peters and S. Schaal, Natural actor-critic, Neurocomputing, vol. 71, nos. 7–9, pp. 1180–1190, 2008.
[21] P. Long, T. Fan, X. Liao, W. Liu, H. Zhang, and J. Pan, Towards optimally decentralized multi-robot collision avoidance via deep reinforcement learning, in Proc. 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 2018, pp. 6252–6259.