Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
University of Chinese Academy of Sciences, Beijing 100049, China
Abstract
With the continuous development of robotics and artificial intelligence, robots are increasingly deployed in a wide range of applications. Traditional navigation algorithms, such as Dijkstra and A*, struggle to cope with the many dynamic scenarios encountered in everyday environments. To address navigation in complex dynamic scenes, we present an improved reinforcement-learning-based local path-planning algorithm that performs well even when many dynamic obstacles are present. The method uses the gmapping algorithm to provide the upper-layer input and a reinforcement learning policy to generate the output. The algorithm enhances the robot's ability to actively avoid obstacles while retaining the adaptability of traditional methods.