
Multi-granularity sequence generation for hierarchical image classification

Xinda Liu1, Lili Wang1,2 (corresponding author)
1 State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
2 Peng Cheng Laboratory, Shenzhen 518000, China

Abstract

Hierarchical multi-granularity image classification is a challenging task that aims to tag each given image with multiple granularity labels simultaneously. Existing methods tend to overlook the fact that different image regions contribute differently to label prediction at different granularities, and also insufficiently consider relationships between the hierarchical multi-granularity labels. We introduce a sequence-to-sequence mechanism to overcome these two problems and propose a multi-granularity sequence generation (MGSG) approach for the hierarchical multi-granularity image classification task. Specifically, we introduce a transformer architecture to encode the image into visual representation sequences. Next, we traverse the taxonomic tree to organize the multi-granularity labels into sequences, which we then vectorize and augment with positional information. The proposed method builds a decoder that takes visual representation sequences and semantic label embeddings as input and outputs the predicted multi-granularity label sequence. The decoder models dependencies and correlations between multi-granularity labels through a masked multi-head self-attention mechanism, and relates visual information to semantic label information through a cross-modality attention mechanism. In this way, the proposed method preserves the relationships between labels at different granularity levels and takes into account the influence of different image regions on labels of different granularities. Evaluations on six public benchmarks qualitatively and quantitatively demonstrate the advantages of the proposed method. Our project is available at https://github.com/liuxindazz/mgsg.
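To make the pipeline concrete, below is a minimal PyTorch sketch of the ideas described above: a taxonomy traversal that yields a coarse-to-fine label sequence, and a transformer decoder that combines masked self-attention over label embeddings with cross-modality attention to visual tokens. All module names, dimensions, and the toy taxonomy are illustrative assumptions for exposition, not the authors' released implementation (see the GitHub link above for that).

import torch
import torch.nn as nn

def ancestor_path(taxonomy, leaf):
    # Walk a {child: parent} taxonomy upward and return the coarse-to-fine
    # label path ending at the given leaf node.
    path = [leaf]
    while taxonomy.get(path[-1]) is not None:
        path.append(taxonomy[path[-1]])
    return list(reversed(path))

class MGSGSketch(nn.Module):
    def __init__(self, num_labels, d_model=256, n_heads=8, n_layers=2, max_depth=16):
        super().__init__()
        # Semantic label embedding plus positional information, as described above.
        self.label_embed = nn.Embedding(num_labels, d_model)
        self.pos_embed = nn.Embedding(max_depth, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.classifier = nn.Linear(d_model, num_labels)

    def forward(self, visual_tokens, label_seq):
        # visual_tokens: (B, N, d_model) patch tokens from any ViT-style encoder.
        # label_seq:     (B, T) coarse-to-fine label ids (teacher forcing).
        T = label_seq.size(1)
        pos = torch.arange(T, device=label_seq.device)
        tgt = self.label_embed(label_seq) + self.pos_embed(pos)
        # Causal mask: each granularity level attends only to coarser predecessors.
        causal = torch.triu(
            torch.full((T, T), float('-inf'), device=label_seq.device), diagonal=1)
        # Masked self-attention over labels + cross-modality attention to visual tokens.
        h = self.decoder(tgt=tgt, memory=visual_tokens, tgt_mask=causal)
        return self.classifier(h)  # (B, T, num_labels): per-granularity logits

A toy run under these assumptions, with three granularity levels (e.g., order, family, species):

taxonomy = {'myrtle_warbler': 'warbler', 'warbler': 'bird', 'bird': None}
print(ancestor_path(taxonomy, 'myrtle_warbler'))  # ['bird', 'warbler', 'myrtle_warbler']

model = MGSGSketch(num_labels=200)
vis = torch.randn(2, 196, 256)          # e.g., 14x14 = 196 ViT patch tokens
labels = torch.randint(0, 200, (2, 3))  # coarse-to-fine label ids per image
logits = model(vis, labels)             # shape (2, 3, 200)

At inference time, such a decoder would generate the sequence greedily, feeding each predicted coarse label back in before predicting the next, finer one.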

Keywords: hierarchical multi-granularity classification, vision and text transformer, sequence generation, fine-grained image recognition, cross-modality attention


Publication history

Received: 21 July 2022
Accepted: 26 December 2022
Published: 03 January 2024
Issue date: April 2024

Copyright

© The Author(s) 2023.

Acknowledgements

This work was supported by the National Key R&D Program of China (2019YFC1521102), the National Natural Science Foundation of China (61932003), and the Beijing Science and Technology Plan (Z221100007722004).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
