Research Article | Open Access

Improved Field-Based Soybean Seed Counting and Localization with Feature Level Considered

Jiangsan Zhao1, Akito Kaga2, Tetsuya Yamada2, Kunihiko Komatsu3, Kaori Hirata4, Akio Kikuchi4, Masayuki Hirafuji1, Seishi Ninomiya1, Wei Guo1 (corresponding author)
1 Graduate School of Agriculture and Life Sciences, The University of Tokyo, Tokyo, Japan
2 Institute of Crop Sciences, National Agriculture and Food Research Organization, Tsukuba, Ibaraki, Japan
3 Western Region Agricultural Research Center, National Agriculture and Food Research Organization, Fukuyama, Hiroshima, Japan
4 Tohoku Agricultural Research Center, National Agriculture and Food Research Organization, Morioka, Iwate, Japan
Abstract

Developing automated soybean seed counting tools will help automate yield prediction before harvest and improve selection efficiency in breeding programs. An integrated approach to counting and localization is ideal for subsequent analysis. Traditional object counting is labor-intensive and error-prone, and offers low localization accuracy. To count soybean seeds directly rather than sequentially, we propose the P2PNet-Soy method. Several strategies were adopted to adjust the architecture and subsequent postprocessing to maximize model performance in seed counting and localization. First, unsupervised clustering was applied to merge closely located overcounts. Second, low-level features were combined with high-level features to provide more information. Third, atrous convolution with different kernel sizes was applied to the low- and high-level features to extract scale-invariant features, accounting for variation in soybean seed size. Fourth, channel and spatial attention effectively separated the foreground from the background, making seed counting and localization easier. Finally, the input image was added to these extracted features to further improve model performance. Using 24 soybean accessions as experimental materials, we trained the model with all the above strategies on field images of individual soybean plants taken from one side and tested it on images taken from the opposite side. The superiority of the proposed P2PNet-Soy over the original P2PNet in soybean seed counting and localization was confirmed by a reduction in mean absolute error from 105.55 to 12.94. Furthermore, the trained model worked effectively on images obtained directly from the field without background interference.
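The unsupervised-clustering step described above merges near-duplicate point predictions into single seed locations. The abstract does not specify the clustering algorithm or threshold, so the following is a minimal illustrative sketch: a greedy distance-threshold merge in NumPy, with a hypothetical pixel radius standing in for whatever parameters the authors actually used.

```python
import numpy as np

def merge_close_points(points, radius=10.0):
    """Greedily merge predicted seed points lying within `radius` pixels.

    Illustrative stand-in for the unsupervised-clustering postprocessing
    described in the abstract; `radius` is a hypothetical threshold, not
    a value taken from the paper.
    """
    points = np.asarray(points, dtype=float)
    merged = []
    used = np.zeros(len(points), dtype=bool)
    for i in range(len(points)):
        if used[i]:
            continue
        # Distances from the current point to all predictions.
        d = np.linalg.norm(points - points[i], axis=1)
        group = (d < radius) & ~used
        used |= group
        # Replace the cluster of overcounts with its centroid.
        merged.append(points[group].mean(axis=0))
    return np.array(merged)
```

For example, two predictions 1–2 pixels apart collapse into one centroid, while a prediction far away is kept as its own seed; the final seed count is simply the number of merged points.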

Plant Phenomics
Article number: 0026
Cite this article:
Zhao J, Kaga A, Yamada T, et al. Improved Field-Based Soybean Seed Counting and Localization with Feature Level Considered. Plant Phenomics, 2023, 5: 0026. https://doi.org/10.34133/plantphenomics.0026


Received: 22 September 2022
Accepted: 01 February 2023
Published: 15 March 2023
© 2023 Jiangsan Zhao et al. Exclusive Licensee Nanjing Agricultural University. No claim to original U.S. Government Works.

Distributed under a Creative Commons Attribution License (CC BY 4.0).
