Deep learning has shown potential in domains with large-scale annotated datasets. However, manual annotation is expensive, time-consuming, and tedious. Pixel-level annotations are particularly costly for semantic segmentation in images with dense, irregular patterns of object instances, such as plant images. In this work, we propose a method for developing high-performing deep learning models for semantic segmentation of such images using little manual annotation. As a use case, we focus on wheat head segmentation. We synthesize a computationally annotated dataset, built from a few annotated images, a short unannotated video clip of a wheat field, and several video clips containing no wheat, and use it to train a customized U-Net model. To account for the distribution shift between the synthesized and real images, we apply three domain adaptation steps that gradually bridge the domain gap. Using only two annotated images, we achieved a Dice score of 0.89 on the internal test set. When further evaluated on a diverse external dataset collected from 18 different domains across five countries, this model achieved a Dice score of 0.73. To expose the model to images from different growth stages and environmental conditions, we incorporated two annotated images from each of the 18 domains to further fine-tune the model, which increased the Dice score to 0.91. These results highlight the utility of the proposed approach in the absence of large annotated datasets. Although our use case is wheat head segmentation, the proposed approach can be extended to other segmentation tasks with similar characteristics, namely irregularly repeating patterns of object instances.
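For context, the Dice scores reported above are instances of the standard Dice coefficient, which measures the overlap between a predicted binary mask and the corresponding ground-truth mask. The minimal Python/NumPy sketch below shows how such a score can be computed for a single image; the function name, the epsilon guard, and the toy masks are illustrative assumptions and do not reproduce the paper's evaluation code.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks (1 = wheat head, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # Dice = 2*|A ∩ B| / (|A| + |B|); eps avoids division by zero for empty masks
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy example: two 4x4 masks that overlap in one of two predicted columns
pred = np.array([[0, 1, 1, 0]] * 4)
target = np.array([[0, 0, 1, 1]] * 4)
print(dice_score(pred, target))  # ~0.5
```

A score of 1.0 indicates perfect overlap and 0.0 indicates none, so the reported values of 0.89 and 0.91 correspond to predicted wheat-head masks that agree closely with the manual annotations.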
Distributed under a Creative Commons Attribution License (CC BY 4.0).