Research Article | Open Access

WRA-Net: Wide Receptive Field Attention Network for Motion Deblurring in Crop and Weed Image

Chaeyeong Yun, Yu Hwan Kim, Sung Jae Lee, Su Jin Im, Kang Ryoung Park (✉)
Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Republic of Korea

Abstract

Accurately and automatically segmenting crops and weeds in camera images is essential in various agricultural technology fields, such as herbicide spraying by farming robots based on crop and weed segmentation information. However, crop and weed images taken with a camera suffer from motion blur due to various causes (e.g., vibration or shaking of a camera on farming robots, or shaking of the crops and weeds themselves), which reduces the accuracy of crop and weed segmentation. Therefore, crop and weed segmentation that is robust to motion-blurred images is essential. However, previous crop and weed segmentation studies did not consider motion-blurred images. To solve this problem, this study proposed a new motion-blur image restoration method based on a wide receptive field attention network (WRA-Net), with which we investigated improving crop and weed segmentation accuracy on motion-blurred images. WRA-Net is built around a main block, the lite wide receptive field attention residual block, which comprises modified depthwise separable convolutional blocks, an attention gate, and a learnable skip connection. We conducted experiments using the proposed method with 3 open databases: the BoniRob, crop/weed field image, and rice seedling and weed datasets. According to the results, the crop and weed segmentation accuracy based on mean intersection over union was 0.7444, 0.7741, and 0.7149, respectively, demonstrating that this method outperformed the state-of-the-art methods.
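As an illustration of the evaluation metric named in the abstract, the sketch below computes mean intersection over union (mIoU) over flattened per-pixel label maps. This is a minimal stdlib-only sketch for clarity, not the authors' implementation; the function name and the convention of skipping classes absent from both prediction and ground truth are illustrative assumptions.

```python
def mean_iou(pred, target, num_classes):
    """Mean intersection over union across classes.

    pred, target: flat sequences of per-pixel class labels
    (e.g., 0 = background, 1 = crop, 2 = weed).
    Classes absent from both prediction and ground truth are
    skipped so they do not distort the average (an assumption
    of this sketch, not necessarily the paper's convention).
    """
    ious = []
    for c in range(num_classes):
        # Intersection: pixels labeled c in both maps.
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        # Union: pixels labeled c in either map.
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0
```

For example, with `pred = [0, 0, 1, 1]` and `target = [0, 1, 1, 1]`, class 0 scores 1/2 and class 1 scores 2/3, giving an mIoU of about 0.583.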

Plant Phenomics
Article number: 0031
Cite this article:
Yun C, Kim YH, Lee SJ, et al. WRA-Net: Wide Receptive Field Attention Network for Motion Deblurring in Crop and Weed Image. Plant Phenomics, 2023, 5: 0031. https://doi.org/10.34133/plantphenomics.0031


Received: 12 January 2023
Accepted: 16 February 2023
Published: 05 April 2023
© 2023 Chaeyeong Yun et al. Exclusive licensee Nanjing Agricultural University. No claim to original U.S. Government Works.

Distributed under a Creative Commons Attribution License (CC BY 4.0).
