Trackers based on convolutional neural networks (CNNs) usually forgo online network updates in order to maintain real-time speed, which inevitably limits their adaptability to changes in object appearance. Correlation filter based trackers, by contrast, can update their model parameters online in real time. In this paper, we present a lightweight end-to-end network architecture, the Discriminant Correlation Filter Network (DCFNet). A differentiable DCF (discriminant correlation filter) layer is incorporated into a Siamese network architecture so that the convolutional features and the correlation filter are learned simultaneously, and the correlation filter can be updated online efficiently. In previous work, we introduced a joint scale-position space into DCFNet, forming a scale DCFNet that predicts object scale and position simultaneously. We combine the scale DCFNet with a convolutional-deconvolutional network to learn both high-level semantic embeddings and low-level fine-grained representations of images. The adaptability of the fine-grained correlation analysis and the generalization capability of the semantic embedding are complementary for visual tracking. Back-propagation is derived in the Fourier frequency domain throughout, preserving the efficiency of the DCF. Extensive evaluations on the OTB (Object Tracking Benchmark) and VOT (Visual Object Tracking Challenge) datasets demonstrate that the proposed trackers are fast while maintaining tracking accuracy.
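To make the DCF layer concrete, here is a minimal NumPy sketch of the standard multi-channel correlation filter that underlies it: the filter is solved in closed form in the Fourier domain, applied to a search patch by element-wise multiplication, and updated online by interpolating its numerator and denominator terms. The function names, shapes, learning rate, and regularization default are illustrative assumptions, not the paper's implementation; DCFNet additionally makes these operations differentiable so the feature extractor can be trained end to end.

```python
import numpy as np

def dcf_train(x, y, lam=1e-4):
    """Closed-form multi-channel correlation filter in the Fourier domain.

    x   : (C, H, W) real-valued feature maps of the target patch
    y   : (H, W) desired Gaussian-shaped response
    lam : ridge regularization weight (illustrative default)
    """
    xf = np.fft.fft2(x, axes=(-2, -1))            # per-channel 2-D FFT
    yf = np.fft.fft2(y)
    num = yf[None] * np.conj(xf)                  # per-channel numerator
    den = np.sum(xf * np.conj(xf), axis=0) + lam  # shared denominator
    return num, den

def dcf_detect(num, den, z):
    """Response map of the filter on a search patch z of shape (C, H, W)."""
    zf = np.fft.fft2(z, axes=(-2, -1))
    rf = np.sum((num / den[None]) * zf, axis=0)   # aggregate channel responses
    return np.real(np.fft.ifft2(rf))              # back to the spatial domain

def dcf_update(num, den, new_num, new_den, lr=0.01):
    """Online model update by linear interpolation of the filter terms."""
    return (1 - lr) * num + lr * new_num, (1 - lr) * den + lr * new_den

# Toy usage: detecting on the training patch should reproduce the label peak.
rng = np.random.default_rng(0)
C, H, W = 32, 64, 64
x = rng.standard_normal((C, H, W))
gy, gx = np.mgrid[0:H, 0:W]
y = np.exp(-((gy - H // 2) ** 2 + (gx - W // 2) ** 2) / (2 * 5.0 ** 2))
num, den = dcf_train(x, y)
resp = dcf_detect(num, den, x)
print(np.unravel_index(resp.argmax(), resp.shape))  # -> (32, 32), the label peak
```

Because training, detection, and the online update all reduce to element-wise operations on FFTs, the per-frame cost stays low, which is the efficiency the abstract refers to.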
Wu Y, Lim J, Yang M H. Object tracking benchmark. IEEE Trans. Pattern Analysis and Machine Intelligence, 2015, 37(9): 1834–1848. DOI: 10.1109/TPAMI.2014.2388226.
Tan K, Wei Z Z. Learning an orientation and scale adaptive tracker with regularized correlation filters. IEEE Access, 2019, 7: 53476–53486. DOI: 10.1109/ACCESS.2019.2912527.
Zhong Z, Yang Z C, Feng W T, Wu W, Hu Y Y, Liu C L. Decision controller for object tracking with deep reinforcement learning. IEEE Access, 2019, 7: 28069–28079. DOI: 10.1109/ACCESS.2019.2900476.
Kalal Z, Mikolajczyk K, Matas J. Tracking-learning-detection. IEEE Trans. Pattern Analysis and Machine Intelligence, 2012, 34(7): 1409–1422. DOI: 10.1109/TPAMI.2011.239.
Hare S, Golodetz S, Saffari A, Vineet V, Cheng M M, Hicks S L, Torr P H S. Struck: Structured output tracking with kernels. IEEE Trans. Pattern Analysis and Machine Intelligence, 2016, 38(10): 2096–2109. DOI: 10.1109/TPAMI.2015.2509974.
Henriques J F, Caseiro R, Martins P, Batista J. High-speed tracking with kernelized correlation filters. IEEE Trans. Pattern Analysis and Machine Intelligence, 2015, 37(3): 583–596. DOI: 10.1109/TPAMI.2014.2345390.
Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z H, Karpathy A, Khosla A, Bernstein M, Berg A C, Fei-Fei L. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 2015, 115(3): 211–252. DOI: 10.1007/s11263-015-0816-y.
Gao J, Wang Q, Xing J L, Ling H B, Hu W M, Maybank S. Tracking-by-fusion via Gaussian process regression extended to transfer learning. IEEE Trans. Pattern Analysis and Machine Intelligence, 2020, 42(4): 939–955. DOI: 10.1109/TPAMI.2018.2889070.
Li A N, Lin M, Wu Y, Yang M H, Yan S C. NUS-PRO: A new visual tracking challenge. IEEE Trans. Pattern Analysis and Machine Intelligence, 2016, 38(2): 335–349. DOI: 10.1109/TPAMI.2015.2417577.
Liang P P, Blasch E, Ling H B. Encoding color information for visual tracking: Algorithms and benchmark. IEEE Trans. Image Processing, 2015, 24(12): 5630–5644. DOI: 10.1109/TIP.2015.2482905.
Danelljan M, Häger G, Khan F S, Felsberg M. Discriminative scale space tracking. IEEE Trans. Pattern Analysis and Machine Intelligence, 2017, 39(8): 1561–1575. DOI: 10.1109/TPAMI.2016.2609928.
Gordon D, Farhadi A, Fox D. Re3: Real-time recurrent regression networks for visual tracking of generic objects. IEEE Robotics and Automation Letters, 2018, 3(2): 788–795. DOI: 10.1109/LRA.2018.2792152.