Adversarial attacks craft adversarial examples (AEs) to fool convolutional neural networks. The mainstream gradient-based attacks, built on first-order optimization methods, encounter bottlenecks in generating highly transferable AEs that attack unknown models. Considering that high-order methods can be better optimization algorithms, we attempt to build high-order adversarial attacks to improve the transferability of AEs. However, solving the optimization problem of adversarial attacks directly via higher-order derivatives is computationally expensive and may suffer from non-convergence. We therefore leverage the Runge−Kutta (RK) method, an accurate yet efficient high-order numerical solver of ordinary differential equations (ODEs), to approximate high-order adversarial attacks. We first formulate the gradient descent process of a gradient-based attack as an ODE, and then numerically solve this ODE via the RK method to develop approximate high-order adversarial attacks. Concretely, by ignoring the higher-order infinitesimal terms in the Taylor expansion of the loss, the proposed method uses a linear combination of the present gradient and look-ahead gradients in place of the computationally expensive high-order derivatives, yielding a relatively fast, approximately equivalent high-order adversarial attack. The proposed high-order adversarial attack can be readily integrated with transferability augmentation methods to generate highly transferable AEs. Extensive experiments demonstrate that the RK-based attacks exhibit higher transferability than state-of-the-art attacks.
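As a rough illustration of the idea described above (not the authors' released code), the sketch below shows a second-order Runge−Kutta (Heun-style) iterative attack: each step combines the present gradient with a look-ahead gradient evaluated one tentative step ahead, instead of computing second-order derivatives. All names (`rk2_attack`, `eps`, `alpha`, `steps`) are illustrative assumptions.

```python
# Hypothetical sketch of an RK2-style (Heun) transferable attack in PyTorch.
import torch
import torch.nn.functional as F

def rk2_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # k1: gradient of the loss at the current iterate
        loss = F.cross_entropy(model(x_adv), y)
        k1 = torch.autograd.grad(loss, x_adv)[0]
        # Tentative look-ahead point obtained by following k1 for one step
        x_mid = (x_adv + alpha * k1.sign()).detach().requires_grad_(True)
        # k2: look-ahead gradient at the tentative point
        loss_mid = F.cross_entropy(model(x_mid), y)
        k2 = torch.autograd.grad(loss_mid, x_mid)[0]
        # Linear combination of present and look-ahead gradients (Heun update)
        update = (k1 + k2) / 2
        x_adv = x_adv.detach() + alpha * update.sign()
        # Project back into the eps-ball around x and the valid pixel range
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv.detach()
```

Higher-order RK schemes follow the same pattern with additional look-ahead gradient evaluations per step.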


Environmental assessments are critical for ensuring the sustainable development of human civilization. The integration of artificial intelligence (AI) in these assessments has shown great promise, yet the "black box" nature of AI models often undermines trust due to the lack of transparency in their decision-making processes, even when these models demonstrate high accuracy. To address this challenge, we evaluated the performance of a transformer model against other AI approaches, utilizing extensive multivariate and spatiotemporal environmental datasets encompassing both natural and anthropogenic indicators. We further explored the application of saliency maps as a novel explainability tool in multi-source AI-driven environmental assessments, enabling the identification of individual indicators' contributions to the model's predictions. We find that the transformer model outperforms the other approaches, achieving an accuracy of about 98% and an area under the receiver operating characteristic curve (AUC) of 0.891. Regionally, the environmental assessment values are predominantly classified as level II or III in the central and southwestern study areas, level IV in the northern region, and level V in the western region. Through explainability analysis, we identify water hardness, total dissolved solids, and arsenic concentrations as the most influential indicators in the model. Our AI-driven environmental assessment model is accurate and explainable, offering actionable insights for targeted environmental management. Furthermore, this study advances the application of AI in environmental science by presenting a robust, explainable model that bridges the gap between machine learning and environmental governance, enhancing both understanding and trust in AI-assisted environmental assessments.
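For readers unfamiliar with saliency maps in this setting, the following is a minimal sketch, under stated assumptions, of gradient-based saliency for ranking indicator contributions: the gradient of the predicted class score with respect to each input indicator is taken in absolute value and averaged over a batch. It assumes a trained PyTorch classifier `model` mapping a batch of indicator vectors to class logits; `feature_saliency` and the variable names are illustrative, not the study's API.

```python
# Hypothetical gradient-saliency sketch for a tabular transformer classifier.
import torch

def feature_saliency(model, x):
    """Per-indicator saliency: |d logit_pred / d x|, averaged over the batch."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)                                  # shape: (batch, num_classes)
    pred = logits.argmax(dim=1)                        # predicted class per sample
    score = logits.gather(1, pred.unsqueeze(1)).sum()  # sum of predicted-class scores
    score.backward()
    return x.grad.abs().mean(dim=0)                    # shape: (num_indicators,)

# Usage (illustrative): rank indicators such as water hardness, TDS, and arsenic
# saliency = feature_saliency(model, X_batch)
# top_indicators = saliency.argsort(descending=True)
```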