To address the poor tracking robustness of deep learning object tracking algorithms in complex scenes with severe occlusion, deformation, and object rotation, an improved deep reinforcement learning object tracking algorithm based on an actor-double-critic network is proposed. In the offline training phase, the actor network moves the rectangular box representing the object location according to the input image sequence, producing an action value, i.e., the horizontal, vertical, and scale transformation of the object. The designed double critic network then evaluates the action, and the two output Q values are averaged to guide the actor network in optimizing its tracking strategy. The double critic design effectively improves stability and convergence, especially in challenging scenes such as object occlusion, and significantly improves tracking performance. In the online tracking phase, the trained actor network infers the transformation action of the bounding box, directly moving the box to the object position in the current frame. Comparative tracking experiments on the OTB100 visual tracking benchmark show that a denser reward setting significantly increases the probability that the actor network outputs positive actions, allowing the proposed tracking algorithm to outperform mainstream deep reinforcement learning and deep learning trackers under challenging attributes such as occlusion, deformation, and rotation.
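To make the actor-double-critic update concrete, below is a minimal PyTorch sketch of the offline training step described above. The network sizes, learning rates, state representation (a 512-dimensional feature vector of the observed patch), and the DDPG-style deterministic actor are illustrative assumptions rather than details from the paper; only the core idea, averaging the two critics' Q values to build the bootstrap target and to guide the actor, follows the abstract.

```python
import torch
import torch.nn as nn

STATE_DIM = 512   # assumed feature size of the observed image patch
ACTION_DIM = 3    # horizontal shift, vertical shift, scale change

class Actor(nn.Module):
    """Maps a state to a bounded bounding-box transformation action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, ACTION_DIM), nn.Tanh(),
        )

    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Scores a (state, action) pair with a scalar Q value."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor = Actor()
critic1, critic2 = Critic(), Critic()
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(
    list(critic1.parameters()) + list(critic2.parameters()), lr=1e-3)

def train_step(s, a, r, s_next, gamma=0.99):
    """One update on a batch: s (B, STATE_DIM), a (B, ACTION_DIM), r (B, 1)."""
    # Double-critic update: both Q heads regress to the same target,
    # built from the *average* of their next-state estimates.
    with torch.no_grad():
        a_next = actor(s_next)
        q_next = 0.5 * (critic1(s_next, a_next) + critic2(s_next, a_next))
        target = r + gamma * q_next
    critic_loss = ((critic1(s, a) - target) ** 2).mean() \
                + ((critic2(s, a) - target) ** 2).mean()
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor update: the averaged double-Q value guides the policy toward
    # actions that both critics rate highly.
    q_avg = 0.5 * (critic1(s, actor(s)) + critic2(s, actor(s)))
    actor_loss = -q_avg.mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
```

Note that averaging the two Q values, as the abstract specifies, differs from the TD3 convention of taking their minimum: it smooths the value estimate rather than pessimistically bounding it. A full implementation would also add target networks and a replay buffer, omitted here for brevity.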
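The online phase then reduces to repeatedly applying the actor's output to the current box. The sketch below shows one plausible mapping from a (dx, dy, ds) action in [-1, 1] to horizontal, vertical, and scale changes of a (cx, cy, w, h) box; the step sizes move_scale and size_scale are illustrative assumptions, as the abstract does not give the paper's exact action-to-box mapping.

```python
# Hypothetical mapping from an actor action to a bounding-box update.
# move_scale and size_scale are illustrative step sizes, not values
# taken from the paper.

def apply_action(box, action, move_scale=0.03, size_scale=0.05):
    """box = (cx, cy, w, h) in pixels; action = (dx, dy, ds) in [-1, 1]."""
    cx, cy, w, h = box
    dx, dy, ds = action
    cx += dx * move_scale * w      # horizontal transformation
    cy += dy * move_scale * h      # vertical transformation
    w *= 1.0 + ds * size_scale     # scale transformation
    h *= 1.0 + ds * size_scale
    return (cx, cy, w, h)

# Per frame: extract features of the current box region, query the
# trained actor, and move the box -- no critic is needed online, e.g.:
#   s = features(frame, box); a = actor(s); box = apply_action(box, a)
```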
This work was supported in part by the National Key R&D Program of China (No. 2022YFB2602203), in part by the National Natural Science Foundation of China (Nos. U20A20225 and 61873200), and in part by the Shaanxi Provincial Key Research and Development Program (No. 2022-GY111).