To better address the difficulties of green fruit recognition in machine vision systems, a new fruit detection model named FCOS-LSC is proposed. The model optimizes the FCOS (fully convolutional one-stage object detection) algorithm by incorporating LSC (level scales, spaces, channels) attention blocks into the network structure. The method achieves efficient recognition and localization of green fruits in images affected by overlapping occlusion, lighting conditions, and capture angles. Specifically, an improved ResNet50 feature extraction network with added deformable convolution is used to fully extract green fruit feature information. A feature pyramid network (FPN) then fuses low-level detail information and high-level semantic information through top-down and lateral (cross) connections. Next, attention mechanisms are applied along 3 dimensions of the generated multiscale feature maps, namely scale, space (the height and width of the feature map), and channel, to improve the feature perception capability of the network. Finally, the classification and regression subnetworks of the model predict the fruit category and bounding box. In the classification branch, a new positive and negative sample selection strategy is applied, which weights the loss function to better distinguish supervision signals and achieve more accurate fruit detection. The proposed FCOS-LSC model has 38.65M parameters and 38.72 GFLOPs, and achieves mean average precision of 63.0% and 75.2% for detecting green apples and green persimmons, respectively. In summary, FCOS-LSC outperforms state-of-the-art models in both precision and complexity, meeting the accuracy and efficiency requirements of green fruit recognition with intelligent agricultural equipment, and it improves the robustness and generalization of green fruit detection models.
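The attention design described above (per-level scale weights plus spatial and channel attention over the FPN outputs) can be pictured with a short PyTorch sketch. This is illustrative only: the module name LSCAttention, the squeeze-and-excitation-style channel bottleneck, the 1x1 convolution for the spatial map, the softmax over pyramid levels, and the reduction ratio are assumptions made for demonstration, not the paper's exact FCOS-LSC block.

import torch
import torch.nn as nn
import torch.nn.functional as F


class LSCAttention(nn.Module):
    """Illustrative attention over scale, space, and channel for a list of FPN feature maps."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Scale attention: one logit per pyramid level, derived from globally pooled features.
        self.scale_fc = nn.Linear(channels, 1)
        # Spatial attention: a 1x1 conv producing an H x W attention map per level.
        self.spatial_conv = nn.Conv2d(channels, 1, kernel_size=1)
        # Channel attention: squeeze-and-excitation-style bottleneck shared across levels (assumption).
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, feats):
        # feats: list of tensors, each of shape (B, C, H_i, W_i)
        pooled = [F.adaptive_avg_pool2d(f, 1).flatten(1) for f in feats]  # (B, C) per level
        # Scale attention: softmax over the pyramid levels.
        scale_logits = torch.stack([self.scale_fc(p) for p in pooled], dim=1)  # (B, L, 1)
        scale_w = torch.softmax(scale_logits, dim=1)

        out = []
        for i, f in enumerate(feats):
            spatial_w = torch.sigmoid(self.spatial_conv(f))           # (B, 1, H, W)
            channel_w = torch.sigmoid(self.channel_fc(pooled[i]))     # (B, C)
            channel_w = channel_w.unsqueeze(-1).unsqueeze(-1)         # (B, C, 1, 1)
            level_w = scale_w[:, i].unsqueeze(-1).unsqueeze(-1)       # (B, 1, 1, 1)
            out.append(f * level_w * spatial_w * channel_w)
        return out


if __name__ == "__main__":
    # Toy FPN outputs: 5 levels, 256 channels, batch of 2.
    feats = [torch.randn(2, 256, s, s) for s in (64, 32, 16, 8, 4)]
    refined = LSCAttention(channels=256)(feats)
    print([f.shape for f in refined])

In this sketch, each level's feature map is reweighted by the product of its level (scale) weight, a per-location spatial map, and a per-channel vector before being passed to the classification and regression heads.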
Distributed under a Creative Commons Attribution License 4.0 (CC BY 4.0).