Research Article | Open Access

From Prototype to Inference: A Pipeline to Apply Deep Learning in Sorghum Panicle Detection

Chrisbin James1, Yanyang Gu2, Andries Potgieter3, Etienne David4, Simon Madec4, Wei Guo5, Frédéric Baret6, Anders Eriksson2, Scott Chapman1 ( )
1 School of Agriculture and Food Sciences, The University of Queensland, Brisbane, Australia
2 School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
3 Queensland Alliance for Agriculture and Food Innovation, The University of Queensland, Brisbane, Australia
4 Arvalis, Institut du Végétal, Paris, France
5 Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan
6 Institut National de la Recherche Agronomique, Paris, France

†These authors contributed equally to this work.


Abstract

Head (panicle) density is a major component of crop yield, especially in crops that produce variable numbers of tillers, such as sorghum and wheat. In both plant breeding and agronomic scouting of commercial crops, panicle density is typically estimated by manual counting, an inefficient and tedious process. Because red–green–blue images are easy to acquire, machine learning approaches have been applied to replace manual counting. However, much of this research focuses on detection per se under limited testing conditions and does not provide a general protocol for deep-learning-based counting. In this paper, we present a comprehensive pipeline for deep-learning-assisted panicle yield estimation in sorghum, covering data collection, model training, model validation, and model deployment in commercial fields. Accurate model training is the foundation of the pipeline. However, in natural environments the deployment data frequently differ from the training data (domain shift), causing the model to fail, so a robust model is essential to a reliable solution. Although we demonstrate the pipeline in a sorghum field, it can be generalized to other grain species. The pipeline produces a high-resolution head density map that can be used to diagnose agronomic variability within a field, and it is built without commercial software.
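The head density map described above can be sketched in a few lines: once a detector has produced panicle locations, aggregating their centres into a spatial grid of counts yields a per-cell density map. The function name, grid-cell size, and coordinate units below are illustrative assumptions for this sketch, not the paper's implementation.

```python
import numpy as np

def density_map(centers, field_shape, cell_size):
    """Aggregate detected panicle centres (x, y) into a grid of counts.

    centers: iterable of (x, y) positions in metres within the field.
    field_shape: (width, height) of the field in metres.
    cell_size: edge length of each square grid cell in metres.
    """
    nx = int(np.ceil(field_shape[0] / cell_size))  # columns
    ny = int(np.ceil(field_shape[1] / cell_size))  # rows
    grid = np.zeros((ny, nx), dtype=int)
    for x, y in centers:
        # Clamp to the last cell so points on the far edge are still counted.
        col = min(int(x // cell_size), nx - 1)
        row = min(int(y // cell_size), ny - 1)
        grid[row, col] += 1
    return grid

# Example: four detections in a 4 m x 4 m plot, binned into 2 m cells.
centers = [(0.5, 0.5), (1.2, 0.8), (3.1, 0.4), (3.5, 3.7)]
grid = density_map(centers, (4.0, 4.0), 2.0)
```

Dividing each cell's count by its area (here 4 m²) converts the grid to panicles per square metre, the unit an agronomist would use when diagnosing within-field variability.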

Plant Phenomics
Article number: 0017
Cite this article:
James C, Gu Y, Potgieter A, et al. From Prototype to Inference: A Pipeline to Apply Deep Learning in Sorghum Panicle Detection. Plant Phenomics, 2023, 5: 0017. https://doi.org/10.34133/plantphenomics.0017


Received: 28 August 2022
Accepted: 01 December 2022
Published: 16 January 2023
© 2023 Chrisbin James et al. Exclusive Licensee Nanjing Agricultural University. No claim to original U.S. Government Works.

Distributed under a Creative Commons Attribution License (CC BY 4.0).
