Research Article | Open Access

Eff-3DPSeg: 3D Organ-Level Plant Shoot Segmentation Using Annotation-Efficient Deep Learning

Liyi Luo 1, Xintong Jiang 1, Yu Yang 1,2, Eugene Roy Antony Samy 1, Mark Lefsrud 1, Valerio Hoyos-Villegas 3, Shangpeng Sun 1 (corresponding author)
1 Bioresource Engineering Department, McGill University, Montreal, QC, Canada
2 Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi, Jiangsu, China
3 Plant Science Department, McGill University, Montreal, QC, Canada

Abstract

Reliable and automated 3-dimensional (3D) plant shoot segmentation is a core prerequisite for extracting plant phenotypic traits at the organ level. Combining deep learning and point clouds offers an effective way to address this challenge. However, fully supervised deep learning methods require datasets to be annotated point-wise, which is extremely expensive and time-consuming. In this work, we proposed a novel weakly supervised framework, Eff-3DPSeg, for 3D plant shoot segmentation. First, high-resolution point clouds of soybean were reconstructed using a low-cost photogrammetry system, and the Meshlab-based Plant Annotator was developed for plant point cloud annotation. Second, a weakly supervised deep learning method was proposed for plant organ segmentation. The method comprised (a) pretraining a self-supervised network with the Viewpoint Bottleneck loss to learn meaningful intrinsic structure representations from the raw point clouds and (b) fine-tuning the pretrained model, with only about 0.5% of points annotated, to perform plant organ segmentation. Afterward, 3 phenotypic traits (stem diameter, leaf width, and leaf length) were extracted. To test the generality of the proposed method, the public dataset Pheno4D was included in this study. Experimental results showed that the weakly supervised network achieved segmentation performance comparable to the fully supervised setting. Our method achieved 95.1%, 96.6%, 95.8%, and 92.2% in precision, recall, F1 score, and mIoU for stem–leaf segmentation on the soybean dataset, and 53%, 62.8%, and 70.3% in AP, AP@25, and AP@50 for leaf instance segmentation on the Pheno4D dataset. This study provides an effective way to characterize 3D plant architecture, which will help plant breeders enhance selection processes. The trained networks are available at https://github.com/jieyi-one/EFF-3DPSEG.
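The two-stage recipe in the abstract — self-supervised pretraining with a view-decorrelation loss, then fine-tuning on roughly 0.5% of annotated points — can be sketched in miniature. The snippet below is an illustrative NumPy sketch, not the authors' implementation (their network is a sparse 3D CNN trained on real point clouds): the loss shown is a generic Barlow Twins-style decorrelation between features of two augmented views, which is the family of objective the Viewpoint Bottleneck loss belongs to, and the function names, the `lam` weight, and the masking helper are all assumptions for illustration.

```python
import numpy as np

def viewpoint_bottleneck_style_loss(z1, z2, lam=0.005):
    """Decorrelation loss between per-point features of two augmented
    views of the same point cloud (illustrative form, not the paper's
    exact formulation). z1, z2: (n_points, feat_dim) arrays."""
    # Standardize each feature channel across points.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    # Cross-correlation matrix between the two views' features.
    c = z1.T @ z2 / len(z1)
    # Push diagonal toward 1 (views agree), off-diagonal toward 0
    # (features are decorrelated, i.e., non-redundant).
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lam * off_diag

def sparse_label_mask(n_points, fraction=0.005, seed=0):
    """Pick ~0.5% of points to carry organ labels; during fine-tuning,
    the segmentation loss would be computed only on these points."""
    rng = np.random.default_rng(seed)
    k = max(1, int(n_points * fraction))
    mask = np.zeros(n_points, dtype=bool)
    mask[rng.choice(n_points, size=k, replace=False)] = True
    return mask
```

In this scheme, pretraining needs no labels at all (the loss compares two views of the same cloud), and annotation effort is confined to the tiny set selected by the mask, which is what makes the framework annotation-efficient.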


Plant Phenomics
Article number: 0080
Cite this article:
Luo L, Jiang X, Yang Y, et al. Eff-3DPSeg: 3D Organ-Level Plant Shoot Segmentation Using Annotation-Efficient Deep Learning. Plant Phenomics, 2023, 5: 0080. https://doi.org/10.34133/plantphenomics.0080


Received: 27 December 2022
Accepted: 23 July 2023
Published: 02 August 2023
© 2023 Liyi Luo et al. Exclusive licensee Nanjing Agricultural University. No claim to original U.S. Government Works.

Distributed under a Creative Commons Attribution License (CC BY 4.0).
