Open Access

Key-Part Attention Retrieval for Robotic Object Recognition

State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China

Abstract

The ability to recognize novel objects from only a few visual samples is critical in robotic applications. Existing methods mainly address the recognition of inter-category objects; however, recognizing objects from different sub-classes within the same category remains challenging because of their similar appearances. In this paper, we propose a key-part attention retrieval solution that distinguishes novel objects of different sub-classes from a few samples without re-training. Specifically, an object encoder, consisting of a convolutional neural network with attention and a key-part aggregation module, is designed to generate an object attention map and extract an object-level embedding, where the attention map from the middle stage of the backbone guides the key-part aggregation. In addition, to overcome the non-differentiability of key-part attention, the object encoder is trained with a two-step scheme, yielding a more stable object-level embedding. On this basis, potential objects are located in a scene image by mining connected domains of the attention map. Each potential object is then recognized by matching its embedding against the embeddings of the support data. The effectiveness of the proposed method is verified by experiments.
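
To make the pipeline described above concrete, the following is a minimal PyTorch sketch of its three stages: an object encoder that derives an attention map from a middle-stage feature map and uses it for key-part (attention-weighted) aggregation into an object-level embedding; connected-domain mining of the attention map to locate potential objects in a scene; and matching of each potential object against the support embeddings. All names, the mean-channel attention, the fixed threshold, the 224x224 crop size, and the nearest-neighbor rule are illustrative assumptions for this sketch, not the exact encoder design or two-step training scheme of the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy import ndimage


class KeyPartEncoder(nn.Module):
    """Sketch of an object encoder: CNN backbone with attention plus key-part aggregation.

    `backbone` is assumed to return (final_feats, mid_feats); the object attention map is
    derived from the middle-stage features and used to weight the final features.
    `feat_dim` must match the channel count of the final feature map.
    """

    def __init__(self, backbone, feat_dim, embed_dim=512):
        super().__init__()
        self.backbone = backbone
        self.proj = nn.Linear(feat_dim, embed_dim)

    def forward(self, x):
        feats, mid_feats = self.backbone(x)                          # [B, C, h, w], [B, C', H, W]
        attn = torch.sigmoid(mid_feats.mean(dim=1, keepdim=True))    # crude object attention map [B, 1, H, W]
        attn_small = F.interpolate(attn, size=feats.shape[-2:],
                                   mode="bilinear", align_corners=False)
        # Key-part aggregation: attention-weighted average pooling of the final features.
        num = (feats * attn_small).flatten(2).sum(-1)
        den = attn_small.flatten(2).sum(-1).clamp_min(1e-6)
        emb = F.normalize(self.proj(num / den), dim=-1)              # object-level embedding [B, embed_dim]
        return emb, attn


@torch.no_grad()
def recognize_scene(scene, encoder, support_embs, support_labels, attn_thresh=0.5):
    """Locate potential objects via connected domains of the attention map and
    label each one by nearest-neighbor matching against the support embeddings."""
    _, attn = encoder(scene.unsqueeze(0))                            # scene: [3, H, W]
    mask = (attn[0, 0] > attn_thresh).cpu().numpy()
    regions, n_regions = ndimage.label(mask)                         # connected domains
    stride_y = scene.shape[1] / mask.shape[0]
    stride_x = scene.shape[2] / mask.shape[1]

    detections = []
    for r in range(1, n_regions + 1):
        ys, xs = (regions == r).nonzero()
        # Map the connected domain back to an image-space box (a potential object).
        y0, y1 = int(ys.min() * stride_y), int((ys.max() + 1) * stride_y)
        x0, x1 = int(xs.min() * stride_x), int((xs.max() + 1) * stride_x)
        crop = scene[:, y0:y1, x0:x1].unsqueeze(0)
        crop = F.interpolate(crop, size=(224, 224), mode="bilinear", align_corners=False)
        emb, _ = encoder(crop)
        sims = emb @ support_embs.T                                  # cosine similarity of unit vectors
        detections.append(((x0, y0, x1, y1), support_labels[sims.argmax().item()]))
    return detections
```

In this sketch, `support_embs` would be obtained by running the same encoder over the few support samples of each sub-class, so that matching reduces to a simple similarity search over a small embedding set.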

Tsinghua Science and Technology
Pages 644-655
Cite this article:
Liu J, Cao Z, Tang Y. Key-Part Attention Retrieval for Robotic Object Recognition. Tsinghua Science and Technology, 2024, 29(3): 644-655. https://doi.org/10.26599/TST.2023.9010022


Received: 06 September 2022
Revised: 09 January 2023
Accepted: 21 March 2023
Published: 04 December 2023
© The Author(s) 2024.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
