In recent years, deep learning models have become the standard for agricultural computer vision. Such models are typically fine-tuned to agricultural tasks using model weights that were originally trained on more general, non-agricultural datasets. This lack of agriculture-specific pretraining potentially increases training time and resource use and decreases model performance, leading to an overall decrease in data efficiency. To overcome this limitation, we collect a wide range of existing public datasets for 3 distinct tasks, standardize them, and construct standard training and evaluation pipelines, providing us with a set of benchmarks and pretrained models. We then conduct a number of experiments using methods that are commonly used in deep learning tasks but unexplored in their domain-specific applications for agriculture. Our experiments guide us in developing a number of approaches to improve data efficiency when training agricultural deep learning models, without large-scale modifications to existing pipelines. Our results demonstrate that even slight training modifications, such as using agricultural pretrained model weights or adopting specific spatial augmentations in data processing pipelines, can considerably boost model performance and shorten convergence time, saving training resources. Furthermore, we find that even models trained on low-quality annotations can perform comparably to their high-quality equivalents, suggesting that datasets with poor annotations can still be used for training, expanding the pool of currently available datasets. Our methods are broadly applicable throughout agricultural deep learning and present high potential for substantial improvements in data efficiency.
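The spatial augmentations mentioned above must be applied identically to an image and its pixel-level labels so that annotations stay aligned. The following is a minimal NumPy sketch of that idea (random flip and 90° rotation applied jointly to an image and a segmentation mask); the function name and transform set are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def random_spatial_augment(image, mask, rng):
    """Apply the same random spatial transform to an image and its
    segmentation mask, so per-pixel labels remain aligned.

    A minimal sketch of label-preserving spatial augmentation;
    real pipelines typically add crops, scaling, and rotation by
    arbitrary angles.
    """
    # Random horizontal flip (axis 1 is the width axis).
    if rng.random() < 0.5:
        image = np.flip(image, axis=1)
        mask = np.flip(mask, axis=1)
    # Random rotation by a multiple of 90 degrees in the image plane.
    k = int(rng.integers(0, 4))
    image = np.rot90(image, k=k, axes=(0, 1))
    mask = np.rot90(mask, k=k, axes=(0, 1))
    # Copy so the result owns its memory (flip/rot90 return views).
    return image.copy(), mask.copy()

# Usage: a 4x4 RGB image with a matching per-pixel mask.
rng = np.random.default_rng(0)
img = np.arange(4 * 4 * 3, dtype=np.float32).reshape(4, 4, 3)
msk = np.arange(4 * 4, dtype=np.int64).reshape(4, 4)
aug_img, aug_msk = random_spatial_augment(img, msk, rng)
assert aug_img.shape == (4, 4, 3) and aug_msk.shape == (4, 4)
```

Because both arrays go through the same transform, any pixel-to-label correspondence that holds before augmentation still holds afterwards, which is what makes such augmentations safe for detection and segmentation tasks.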
Distributed under a Creative Commons Attribution License 4.0 (CC BY 4.0).