Open Access | Just Accepted

Adversarial attack on object detection via object feature-wise attention and perturbation extraction

Wei Xue1, Xiaoyan Xia1, Pengcheng Wan2, Ping Zhong2, and Xiao Zheng1 (✉)

1 School of Computer Science and Technology, Anhui University of Technology, Maanshan 243032, China

2 National Key Laboratory of Science and Technology on Automatic Target Recognition, National University of Defense Technology, Changsha 410073, China


Abstract

Deep neural networks are widely used in computer vision tasks, but they are vulnerable to adversarial samples, which sharply degrade recognition accuracy. Although traditional algorithms for crafting adversarial samples attack classification models effectively, their performance degrades against object detection models, whose structures are more complex. To address this issue, this paper first analyzes the multi-scale feature extraction mechanism of object detection models and then proposes a novel adversarial sample generation algorithm for attacking detection models, built from an object feature-wise attention module and a perturbation extraction module. In the first module, we compute the noise distribution over the object region of the multi-scale feature map, which narrows the range of the perturbation and improves the stealthiness of the adversarial samples. In the second module, we feed this noise distribution into a generative adversarial network to produce adversarial perturbation with strong attack transferability. As a result, the proposed approach better confuses the judgment of detection models. Experiments on the DroneVehicle dataset show that our method is computationally efficient and attacks detection models effectively, as measured by both qualitative and quantitative analysis.
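To make the two-module design concrete, the following is a minimal PyTorch sketch of the general idea described in the abstract: an attention map that confines the perturbation to object regions of a multi-scale feature map, and a small generator that maps this map to a bounded perturbation. All class names, parameters (e.g., ObjectAttention, PerturbationGenerator, epsilon), and architectural details are illustrative assumptions, not the authors' released implementation.

# A minimal sketch of the two modules, assuming a PyTorch implementation.
# Names and architecture are hypothetical, not the authors' code.

import torch
import torch.nn as nn

class ObjectAttention(nn.Module):
    """Confine perturbation to object regions: a box-derived mask weighted
    by normalized feature activations (assumed design)."""
    def forward(self, feat, boxes, image_size):
        # feat: (B, C, h, w) feature map; boxes: per-sample list of
        # (x1, y1, x2, y2) object boxes in image coordinates.
        B, _, h, w = feat.shape
        H, W = image_size
        mask = torch.zeros(B, 1, h, w, device=feat.device)
        for b, sample_boxes in enumerate(boxes):
            for x1, y1, x2, y2 in sample_boxes:
                # Project image-space boxes onto the feature grid.
                fx1, fy1 = int(x1 / W * w), int(y1 / H * h)
                fx2, fy2 = int(x2 / W * w) + 1, int(y2 / H * h) + 1
                mask[b, :, fy1:fy2, fx1:fx2] = 1.0
        # Weight the box mask by channel-averaged activation energy,
        # approximating a noise distribution over the object region.
        energy = feat.abs().mean(dim=1, keepdim=True)
        energy = energy / (energy.amax(dim=(2, 3), keepdim=True) + 1e-8)
        return mask * energy  # (B, 1, h, w) attention map

class PerturbationGenerator(nn.Module):
    """Toy generator mapping the (upsampled) attention map to an
    L_inf-bounded adversarial perturbation."""
    def __init__(self, epsilon=8 / 255):  # epsilon is an assumed budget
        super().__init__()
        self.epsilon = epsilon
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, attn, image_size):
        attn = nn.functional.interpolate(attn, size=image_size,
                                         mode="bilinear", align_corners=False)
        return self.epsilon * self.net(attn)  # bounded by Tanh * epsilon

# Usage: add the object-localized perturbation to the clean image.
if __name__ == "__main__":
    images = torch.rand(2, 3, 256, 256)
    feats = torch.rand(2, 64, 32, 32)          # stand-in backbone features
    boxes = [[(30, 40, 120, 160)], [(50, 60, 200, 220)]]
    attn = ObjectAttention()(feats, boxes, (256, 256))
    delta = PerturbationGenerator()(attn, (256, 256))
    adv = (images + delta).clamp(0, 1)
    print(adv.shape)  # torch.Size([2, 3, 256, 256])

In the paper's full pipeline the generator is trained adversarially against the target detector; the sketch above omits that training loop and only shows how an attention map can localize and bound the perturbation.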

Tsinghua Science and Technology
Cite this article:
Xue W, Xia X, Wan P, et al. Adversarial attack on object detection via object feature-wise attention and perturbation extraction. Tsinghua Science and Technology, 2024, https://doi.org/10.26599/TST.2024.9010029


Received: 27 July 2023
Revised: 03 December 2023
Accepted: 26 January 2024
Available online: 07 March 2024

© The author(s) 2024.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
