Research Article | Open Access

Semi-Self-Supervised Learning for Semantic Segmentation in Images with Dense Patterns

Keyhan Najafian 1, Alireza Ghanbari 2, Mahdi Sabet Kish 3, Mark Eramian 1, Gholam Hassan Shirdel 2, Ian Stavness 1, Lingling Jin 1,*, Farhad Maleki 4,*
1. Department of Computer Science, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
2. Mathematics Department, Faculty of Sciences, University of Qom, Qom, Iran
3. Department of Mathematics, Faculty of Mathematical Science, Shahid Beheshti University, Tehran, Iran
4. Department of Computer Science, University of Calgary, Calgary, Alberta, Canada
* Corresponding authors

Abstract

Deep learning has shown potential in domains with large-scale annotated datasets. However, manual annotation is expensive, time-consuming, and tedious. Pixel-level annotations are particularly costly for semantic segmentation in images with dense, irregular patterns of object instances, such as in plant images. In this work, we propose a method for developing high-performing deep learning models for semantic segmentation of such images using little manual annotation. As a use case, we focus on wheat head segmentation. Using a few annotated images, a short unannotated video clip of a wheat field, and several video clips with no wheat, we synthesize a computationally annotated dataset to train a customized U-Net model. Considering the distribution shift between the synthesized and real images, we apply three domain adaptation steps to gradually bridge the domain gap. Using only two annotated images, we achieved a Dice score of 0.89 on the internal test set. When further evaluated on a diverse external dataset collected from 18 different domains across five countries, this model achieved a Dice score of 0.73. To expose the model to images from different growth stages and environmental conditions, we incorporated two annotated images from each of the 18 domains to further fine-tune the model, which increased the Dice score to 0.91. These results highlight the utility of the proposed approach in the absence of large annotated datasets. Although our use case is wheat head segmentation, the proposed approach can be extended to other segmentation tasks with similar characteristics of irregularly repeating patterns of object instances.
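The abstract reports segmentation quality as Dice scores. For readers unfamiliar with the metric, the following minimal sketch (not the authors' code; `dice_score` and the toy masks are hypothetical names introduced here) shows how the Dice coefficient between a predicted binary mask and a ground-truth binary mask is typically computed:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks.

    Dice = 2 * |pred AND target| / (|pred| + |target|); eps guards
    against division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two 4x4 masks with 8 foreground pixels each, 4 overlapping.
a = np.zeros((4, 4), dtype=np.uint8)
b = np.zeros((4, 4), dtype=np.uint8)
a[:2, :] = 1   # rows 0-1
b[1:3, :] = 1  # rows 1-2
print(round(dice_score(a, b), 2))  # -> 0.5
```

A Dice score of 1.0 indicates a perfect overlap between prediction and annotation, and 0.0 indicates no overlap, which is why the reported values of 0.89 and 0.91 indicate close agreement with the manual annotations.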


Plant Phenomics
Article number: 0025
Cite this article:
Najafian K, Ghanbari A, Sabet Kish M, et al. Semi-Self-Supervised Learning for Semantic Segmentation in Images with Dense Patterns. Plant Phenomics, 2023, 5: 0025. https://doi.org/10.34133/plantphenomics.0025


Received: 18 August 2022
Accepted: 17 January 2023
Published: 24 February 2023
© 2023 Keyhan Najafian et al. Exclusive Licensee Nanjing Agricultural University. No claim to original U.S. Government Works.

Distributed under a Creative Commons Attribution License (CC BY 4.0).
