
Deep neural networks are widely used in computer vision tasks, but they are vulnerable to adversarial samples, which sharply degrade recognition accuracy. Although traditional algorithms for crafting adversarial samples attack classification models effectively, their performance degrades against object detection models, which have more complex structures. To address this issue, we first analyze the multi-scale feature extraction mechanism of object detection models and then propose a novel adversarial sample generation algorithm for attacking detection models, built from an object feature-wise attention module and a perturbation extraction module. In the first module, we compute the noise distribution within the object region based on the multi-scale feature map, which narrows the range of the perturbation and improves the stealthiness of the adversarial samples. In the second module, we feed this noise distribution into a generative adversarial network to generate adversarial perturbations with strong attack transferability. As a result, the proposed approach is better able to mislead the predictions of detection models. Experiments on the DroneVehicle dataset show that our method is computationally efficient and attacks detection models effectively, as measured by both qualitative and quantitative analysis.
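The abstract describes a two-module pipeline: multi-scale detector features are fused into an object-region noise distribution, which then conditions a generator that produces a bounded, region-restricted perturbation. The sketch below is a minimal illustration of that structure only, not the authors' implementation; the module names, channel widths, generator architecture, and the perturbation budget `eps` are all assumptions, and the GAN discriminator and detection-loss training are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObjectAttention(nn.Module):
    """Hypothetical object feature-wise attention module: fuses multi-scale
    feature maps into a spatial mask concentrated on object regions."""
    def __init__(self, channels=(256, 512, 1024)):
        super().__init__()
        self.reduce = nn.ModuleList(nn.Conv2d(c, 64, kernel_size=1) for c in channels)
        self.fuse = nn.Conv2d(64 * len(channels), 1, kernel_size=3, padding=1)

    def forward(self, features, out_size):
        # Project each scale to a common width and resolution, then fuse into one mask.
        maps = [F.interpolate(r(f), size=out_size, mode="bilinear", align_corners=False)
                for r, f in zip(self.reduce, features)]
        return torch.sigmoid(self.fuse(torch.cat(maps, dim=1)))  # (B, 1, H, W)

class PerturbationGenerator(nn.Module):
    """Hypothetical generator: maps the image plus the object mask to a bounded
    perturbation that is applied only inside the attended region."""
    def __init__(self, eps=8 / 255):
        super().__init__()
        self.eps = eps  # assumed L-infinity perturbation budget
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, image, mask):
        delta = torch.tanh(self.net(torch.cat([image, mask], dim=1))) * self.eps
        return torch.clamp(image + delta * mask, 0.0, 1.0)  # perturb object region only

if __name__ == "__main__":
    # Dummy image and dummy multi-scale features standing in for a detector backbone.
    image = torch.rand(1, 3, 256, 256)
    feats = [torch.rand(1, 256, 64, 64),
             torch.rand(1, 512, 32, 32),
             torch.rand(1, 1024, 16, 16)]
    mask = ObjectAttention()(feats, out_size=image.shape[-2:])
    adv_image = PerturbationGenerator()(image, mask)
    print(adv_image.shape)  # torch.Size([1, 3, 256, 256])
```

In a full attack, the generator would be trained adversarially against a discriminator and against the detection loss of the target model so that the masked perturbation suppresses or misleads its predictions.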

Publication history

Received: 27 July 2023
Revised: 03 December 2023
Accepted: 26 January 2024
Available online: 07 March 2024

Copyright

© The author(s) 2024.

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
