Research Article | Open Access

Estimating Compositions and Nutritional Values of Seed Mixes Based on Vision Transformers

Shamprikta Mehreen 1, Hervé Goëau 2, Pierre Bonnet 2, Sophie Chau 3, Julien Champ 1, Alexis Joly 1
1. Inria, LIRMM, University of Montpellier, CNRS, Montpellier, France
2. CIRAD, UMR AMAP, Montpellier, Occitanie, France
3. Chambre d'Agriculture - Haute Vienne, Limoges, Nouvelle-Aquitaine, France

Abstract

The cultivation of seed mixtures for local pastures is a traditional mixed-cropping technique that combines cereals and legumes to produce, at low cost, animal feed balanced in energy and protein for livestock systems. By considerably improving the autonomy and safety of agricultural systems and reducing their environmental impact, this type of crop responds favorably both to the evolution of European regulations on the use of phytosanitary products and to the expectations of consumers who wish to increase their consumption of organic products. However, farmers find it difficult to adopt because cereals and legumes do not ripen synchronously, so the harvested seeds are heterogeneous, which makes their nutritional value harder to assess. Much work therefore remains to acquire and aggregate the technical and economic references needed to evaluate to what extent the cultivation of seed mixtures could help secure and reduce the cost of herd feeding. This paper proposes new artificial intelligence techniques, transferable to an online or smartphone application, to automatically estimate the nutritional value of harvested seed mixes, helping farmers better manage their yield and encouraging them to promote and contribute to a better knowledge of this type of cultivation. For this purpose, an original open image dataset of 4,749 images of seed mixes covering 11 seed varieties was built, with which 2 types of recent deep learning models were trained. The results highlight the potential of this method and show that the best-performing model is a recent state-of-the-art vision transformer pre-trained with self-supervision (Bidirectional Encoder representation from Image Transformers, BEiT). It estimates the nutritional value of seed mixtures with a coefficient of determination (R²) of 0.91, demonstrating the interest of this type of approach for possible use at large scale.
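As a purely hypothetical illustration (the variety names, nutrient figures, and compositions below are invented, not the paper's data), the pipeline the abstract describes can be sketched in two steps: derive the nutritional value of a mix as the proportion-weighted average of its varieties' nutrient contents, then score predicted values against ground truth with the coefficient of determination R².

```python
# Illustrative sketch: composition -> nutritional value -> R^2 score.
# All numbers are made up for the example; they are not the paper's data.
import numpy as np

# Hypothetical protein content (g per 100 g) for three seed varieties,
# e.g. one cereal and two legumes.
PROTEIN = np.array([10.0, 23.0, 26.0])

def mix_protein(proportions: np.ndarray) -> float:
    """Nutritional value of a mix = proportion-weighted average of varieties."""
    proportions = proportions / proportions.sum()  # normalise to sum to 1
    return float(proportions @ PROTEIN)

def r2_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Ground-truth vs model-predicted compositions for three sample images.
true_mixes = np.array([[0.50, 0.30, 0.20], [0.80, 0.10, 0.10], [0.20, 0.40, 0.40]])
pred_mixes = np.array([[0.48, 0.32, 0.20], [0.75, 0.15, 0.10], [0.25, 0.35, 0.40]])

y_true = np.array([mix_protein(m) for m in true_mixes])
y_pred = np.array([mix_protein(m) for m in pred_mixes])
print(round(r2_score(y_true, y_pred), 3))
```

In the paper, the compositions themselves come from a fine-tuned vision transformer applied to images of the seed mix; this sketch only shows how a composition estimate translates into the nutritional value and the R² figure reported in the abstract.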

Plant Phenomics
Article number: 0112
Cite this article:
Mehreen S, Goëau H, Bonnet P, et al. Estimating Compositions and Nutritional Values of Seed Mixes Based on Vision Transformers. Plant Phenomics, 2023, 5: 0112. https://doi.org/10.34133/plantphenomics.0112


Received: 15 March 2023
Accepted: 20 October 2023
Published: 10 November 2023
© 2023 Shamprikta Mehreen et al. Exclusive licensee Nanjing Agricultural University. No claim to original U.S. Government Works.

Distributed under a Creative Commons Attribution License 4.0 (CC BY 4.0).
