Deep neural networks (DNNs) are vulnerable to elaborately crafted, imperceptible adversarial perturbations. As adversarial attack methods continue to develop, existing defense algorithms can no longer defend against them effectively. Meanwhile, numerous studies have shown that the vision transformer (ViT) has stronger robustness and generalization performance than the convolutional neural network (CNN) across various domains. Moreover, because the standard denoiser is subject to the error amplification effect, the prediction network cannot correctly classify all reconstructed examples. To address these issues, this paper first proposes a defense network (CVTNet) that combines CNNs and ViTs and is appended in front of the prediction network. CVTNet effectively eliminates adversarial perturbations and maintains high robustness. Furthermore, this paper proposes a regularization loss (
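The abstract describes CVTNet as a hybrid CNN–ViT reconstruction module placed in front of a fixed prediction network. The sketch below illustrates one plausible layout under stated assumptions: a CNN stem for local features, a transformer encoder for global context, a CNN decoder with a residual connection to produce the reconstructed example, and a separate classifier applied afterwards. All layer sizes, the class name `CVTNetSketch`, and the toy classifier are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of a CNN + ViT denoising front end prepended to a frozen
# classifier. Layer sizes and the hybrid layout are assumptions for
# illustration only; the paper's actual CVTNet design is not reproduced here.
import torch
import torch.nn as nn

class CVTNetSketch(nn.Module):
    def __init__(self, in_ch=3, dim=64, patch=4, img_size=32, depth=2, heads=4):
        super().__init__()
        # CNN stem: extracts local features from the (possibly attacked) input.
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Patch embedding + transformer encoder: models global context (ViT branch).
        self.patch_embed = nn.Conv2d(dim, dim, kernel_size=patch, stride=patch)
        n_tokens = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=dim * 4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # CNN decoder: upsamples tokens back to image resolution and predicts
        # the reconstruction residual.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=patch, mode="bilinear", align_corners=False),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, in_ch, 3, padding=1),
        )

    def forward(self, x):
        h = self.stem(x)
        t = self.patch_embed(h)                      # B x dim x H/p x W/p
        b, c, hh, ww = t.shape
        t = t.flatten(2).transpose(1, 2) + self.pos  # B x N x dim
        t = self.encoder(t)
        t = t.transpose(1, 2).reshape(b, c, hh, ww)
        # Residual connection: clean inputs pass through nearly unchanged,
        # while adversarial perturbations are suppressed by the learned residual.
        return x + self.decoder(t)

# Usage: reconstruct first, then classify with the (frozen) prediction network.
# `classifier` here is a toy stand-in for any pretrained model.
defense = CVTNetSketch()
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x_adv = torch.rand(8, 3, 32, 32)
logits = classifier(defense(x_adv))
```

Training such a module would typically minimize a reconstruction-style loss between defended outputs for adversarial and clean inputs, which is consistent with the regularization loss the abstract mentions but does not fully specify here.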