Identification of plant diseases is crucial for the healthy growth of plants. Notably, identification depends on fine image details and must adapt to the variability of real-world conditions, which imposes high demands on the convergence and generalization capabilities of optimizers for deep neural networks (DNNs). However, traditional optimizers struggle to ensure effective convergence and achieve satisfactory generalization. To address this issue, we interpret the optimization process of DNNs as an initial-value problem associated with the gradient flow of an ordinary differential equation (ODE), which is solved by discretization using a high-order multistep method. Then, an improved stochastic gradient descent (SGD)-based optimizer is proposed and characterized by high-order accuracy and zero stability, thereby enhancing convergence to zero-gradient points. The construction of this optimizer is equivalent to increasing the learning rate, which enhances generalization while ensuring convergence. Plant disease identification relies on detailed analysis of plant images, necessitating a well-converged optimization method; generalizing to unseen cases is equally crucial. In plant disease identification experiments, the proposed method achieves effective convergence and satisfactory generalization, demonstrating significant practical utility. The method provides a new perspective for designing optimizers that enable DNNs to achieve excellent performance and reliability.
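The abstract describes discretizing the gradient flow dθ/dt = −∇f(θ) with a high-order multistep method. As a minimal illustration only, the sketch below uses the classical two-step Adams-Bashforth scheme, which is zero-stable and second-order accurate; the paper's actual optimizer and coefficients are not given here, so the specific update rule, function names, and parameters are assumptions.

```python
import numpy as np

def ab2_sgd(grad, theta0, lr=0.1, steps=100):
    """Two-step Adams-Bashforth discretization of the gradient flow
    d(theta)/dt = -grad(theta). Update (assumed for illustration):
        theta_{k+1} = theta_k - lr * (3/2 * g_k - 1/2 * g_{k-1})
    This is NOT the paper's optimizer, just a generic multistep SGD sketch.
    """
    theta = np.asarray(theta0, dtype=float)
    g_prev = grad(theta)
    # Bootstrap the first step with a plain (Euler/SGD) update.
    theta = theta - lr * g_prev
    for _ in range(steps - 1):
        g = grad(theta)
        theta = theta - lr * (1.5 * g - 0.5 * g_prev)
        g_prev = g
    return theta

# Toy usage: minimize f(theta) = ||theta||^2 / 2, whose gradient is theta;
# the gradient flow converges to the zero-gradient point at the origin.
theta_star = ab2_sgd(lambda th: th, np.array([1.0, -2.0]), lr=0.1, steps=200)
```

In a real training loop `grad` would return a stochastic mini-batch gradient; the multistep combination of current and previous gradients is what raises the order of accuracy relative to plain SGD (forward Euler).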
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).