The green fraction (GF), defined as the fraction of green vegetation seen in a given viewing direction, is closely related to the light interception ability of the crop canopy. Monitoring the dynamics of GF is therefore of great interest to breeders seeking genotypes with high radiation use efficiency. The accuracy of GF estimation depends heavily on the quality of the segmentation dataset and the accuracy of the image segmentation method. To improve segmentation accuracy while reducing annotation costs, we developed a self-supervised strategy for deep learning semantic segmentation of rice and wheat field images with strongly contrasting field backgrounds. First, the Digital Plant Phenotyping Platform was used to generate large, perfectly labeled simulated field images for wheat and rice crops, covering diverse canopy structures and a wide range of environmental conditions (sim dataset). We then used the cycle-consistent generative adversarial network (CycleGAN), a domain adaptation model, to bridge the reality gap between the simulated and real images (real dataset), producing simulation-to-reality images (sim2real dataset). Finally, 3 semantic segmentation models (U-Net, DeepLabV3+, and SegFormer) were each trained on the 3 datasets (real, sim, and sim2real), and the performance of the resulting 9 training strategies was assessed on real images captured at various sites. SegFormer trained on the sim2real dataset achieved the best segmentation performance for both crops (rice: Accuracy = 0.940, F1-score = 0.937; wheat: Accuracy = 0.952, F1-score = 0.935) and likewise yielded favorable GF estimates (rice: R² = 0.967, RMSE = 0.048; wheat: R² = 0.984, RMSE = 0.028). Compared with SegFormer trained on the real dataset, the optimal strategy showed a larger advantage for wheat images than for rice images, a discrepancy that can be partially attributed to differences in the backgrounds of the rice and wheat fields. The uncertainty analysis indicated that our strategy can be disrupted by inhomogeneous pixel brightness and by the presence of senescent elements in the images. In summary, our self-supervised strategy addresses the high cost and uncertain annotation accuracy of dataset creation, ultimately improving GF estimation accuracy for rice and wheat field images. The best-performing weights trained for wheat and rice are available at https://github.com/PheniX-Lab/sim2real-seg.
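To illustrate one of the 9 training strategies described above, the sketch below fine-tunes a SegFormer model for binary vegetation segmentation with the Hugging Face transformers API. It is a minimal sketch, not the authors' released code: the `nvidia/mit-b0` backbone, the learning rate, and the synthetic placeholder batch are all illustrative assumptions; in practice the loader would yield sim2real image/mask pairs.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import SegformerForSemanticSegmentation

# Binary task: 0 = background, 1 = green vegetation.
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0",  # assumed backbone; the paper does not specify the MiT variant
    num_labels=2,
)

# Placeholder batch standing in for sim2real image/mask pairs.
images = torch.rand(4, 3, 512, 512)          # normalized RGB images
masks = torch.randint(0, 2, (4, 512, 512))   # per-pixel class labels (int64)
loader = DataLoader(TensorDataset(images, masks), batch_size=2)

optimizer = torch.optim.AdamW(model.parameters(), lr=6e-5)  # assumed hyperparameter
model.train()
for pixel_values, labels in loader:
    # The model upsamples its logits to the label resolution and
    # computes the cross-entropy loss internally when labels are given.
    outputs = model(pixel_values=pixel_values, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```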
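The evaluation side of the pipeline is equally simple to state: GF is the fraction of vegetation pixels in the predicted mask, and segmentation quality is scored pixel-wise. The sketch below, a hedged reconstruction rather than the paper's code, computes GF, Accuracy, and F1-score from binary masks; the synthetic masks and the 5% error rate are placeholders.

```python
import numpy as np

def green_fraction(mask: np.ndarray) -> float:
    """GF for one viewing direction: fraction of pixels labeled as green vegetation.
    `mask` is a binary array (1 = vegetation, 0 = background)."""
    return float(mask.mean())

def accuracy_f1(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Pixel-wise Accuracy and F1-score for the vegetation class."""
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return float(accuracy), float(f1)

# Illustrative usage with synthetic masks (placeholders for model predictions).
rng = np.random.default_rng(0)
truth = (rng.random((512, 512)) > 0.5).astype(np.uint8)
pred = truth.copy()
flip = rng.random(truth.shape) < 0.05        # simulate a 5% segmentation error
pred[flip] = 1 - pred[flip]

print(f"GF = {green_fraction(pred):.3f}")
acc, f1 = accuracy_f1(pred, truth)
print(f"Accuracy = {acc:.3f}, F1-score = {f1:.3f}")
```

GF estimation accuracy (the reported R² and RMSE) then follows from comparing GF values computed on predicted masks against those from reference masks across the test images.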
Distributed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).