References
[1]
Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[2]
A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, in Proc. 25th Int. Conf. Neural Inf. Process. Syst., Lake Tahoe, NV, USA, 2012, pp. 1097–1105.
[3]
K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556v2, 2015.
[4]
K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in Proc. 2016 IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Las Vegas, NV, USA, 2016, pp. 770–778.
[5]
M. D. Zeiler and R. Fergus, Visualizing and understanding convolutional networks, in Proc. 13th Eur. Conf. Comput. Vis., Zurich, Switzerland, 2014, pp. 818–833.
[6]
J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, Squeeze-and-excitation networks, IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 8, pp. 2011–2023, 2020.
[7]
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, Going deeper with convolutions, in Proc. 2015 IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Boston, MA, USA, 2015, pp. 1–9.
[8]
F. Chollet, Xception: Deep learning with depthwise separable convolutions, in Proc. 2017 IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Honolulu, HI, USA, 2017, pp. 1800–1807.
[9]
M. Tan and Q. V. Le, EfficientNet: Rethinking model scaling for convolutional neural networks, in Proc. 36th Int. Conf. Mach. Learn. (ICML2019), Long Beach, CA, USA, 2019, pp. 6105–6114.
[10]
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, and I. Goodfellow, Intriguing properties of neural networks, presented at 2nd Int. Conf. Learn. Represent. (ICLR2014), Banff, Canada, 2014.
[11]
X. Yuan, P. He, Q. Zhu, and X. Li, Adversarial examples: Attacks and defenses for deep learning, IEEE Trans. Neural Netw. Learn. Syst., vol. 30, no. 9, pp. 2805–2824, 2019.
[12]
J. Zhang and C. Li, Adversarial examples: Opportunities and challenges, IEEE Trans. Neural Netw. Learn. Syst., vol. 31, no. 7, pp. 2578–2593, 2020.
[13]
N. Akhtar, A. Mian, N. Kardan, and M. Shah, Advances in adversarial attacks and defenses in computer vision: A survey, IEEE Access, vol. 9, pp. 155161–155196, 2021.
[14]
K. Ren, T. Zheng, Z. Qin, and X. Liu, Adversarial attacks and defenses in deep learning, Engineering, vol. 6, no. 3, pp. 346–360, 2020.
[15]
A. Serban, E. Poll, and J. Visser, Adversarial examples on object recognition: A comprehensive survey, ACM Comput. Surv., vol. 53, no. 3, pp. 1–38, 2020.
[16]
A. Kurakin, I. J. Goodfellow, and S. Bengio, Adversarial examples in the physical world, presented at 2017 Int. Conf. Learn. Represent. (ICLR2017), Toulon, France, 2017.
[17]
N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, The limitations of deep learning in adversarial settings, in Proc. 2016 IEEE Eur. Symp. Secur. Priv. (EuroS&P), Saarbruecken, Germany, 2016, pp. 372–387.
[18]
S. M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, DeepFool: A simple and accurate method to fool deep neural networks, in Proc. 2016 IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Las Vegas, NV, USA, 2016, pp. 2574–2582.
[19]
L. Gao, Z. Huang, J. Song, Y. Yang, and H. T. Shen, Push & pull: Transferable adversarial examples with attentive attack, IEEE Trans. Multimed.
[20]
A. Chaturvedi and U. Garain, Mimic and fool: A task-agnostic adversarial attack, IEEE Trans. Neural Netw. Learn. Syst., vol. 32, no. 4, pp. 1801–1808, 2021.
[21]
I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, Generative adversarial nets, in Proc. 27th Int. Conf. Neural Inf. Process. Syst., Montreal, Canada, 2014, pp. 2672–2680.
[22]
D. P. Kingma and M. Welling, Auto-encoding variational bayes, presented at 2014 Int. Conf. Learn. Represent., Banff, Canada, 2014.
[23]
X. Liu and C. J. Hsieh, Rob-GAN: Generator, discriminator, and adversarial attacker, in Proc. 2019 IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Long Beach, CA, USA, 2019, pp. 11226–11235.
[24]
J. Chen, H. Zheng, H. Xiong, S. Shen, and M. Su, MAG-GAN: Massive attack generator via GAN, Inf. Sci., vol. 536, pp. 67–90, 2020.
[25]
P. Yu, K. Song, and J. Lu, Generating adversarial examples with conditional generative adversarial net, in Proc. 2018 24th Int. Conf. Pattern Recognit. (ICPR), Beijing, China, 2018, pp. 676–681.
[26]
C. Xiao, B. Li, J. Y. Zhu, W. He, M. Liu, and D. Song, Generating adversarial examples with adversarial networks, in Proc. 27th Int. Joint Conf. Artif. Intell. (IJCAI-18), Stockholm, Sweden, 2018, pp. 3905–3911.
[27]
S. Jandial, P. Mangla, S. Varshney, and V. Balasubramanian, AdvGAN++: Harnessing latent layers for adversary generation, in Proc. 2019 IEEE/CVF Int. Conf. Comput. Vis. Workshop (ICCVW), Seoul, Republic of Korea, 2019, pp. 2045–2048.
[28]
T. Yu, S. Wang, C. Zhang, Z. Wang, Y. Li, and X. Yu, Targeted adversarial examples generating method based on cVAE in black box settings, Chin. J. Electron., vol. 30, no. 5, pp. 866–875, 2021.
[29]
U. Upadhyay and P. Mukherjee, Generating out of distribution adversarial attack using latent space poisoning, IEEE Signal Process. Lett., vol. 28, pp. 523–527, 2021.
[30]
I. J. Goodfellow, J. Shlens, and C. Szegedy, Explaining and harnessing adversarial examples, presented at 3rd Int. Conf. Learn. Represent. (ICLR2015), San Diego, CA, USA, 2015.
[31]
A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok, Synthesizing robust adversarial examples, in Proc. 35th Int. Conf. Mach. Learn., Stockholm, Sweden, 2018, pp. 284–293.
[32]
T. Deng and Z. Zeng, Generate adversarial examples by spatially perturbing on the meaningful area, Pattern Recognit. Lett., vol. 125, pp. 632–638, 2019.
[33]
Y. Xu, B. Du, and L. Zhang, Self-attention context network: Addressing the threat of adversarial attacks for hyperspectral image classification, IEEE Trans. Image Process., vol. 30, pp. 8671–8685, 2021.
[34]
P. Vidnerová and R. Neruda, Vulnerability of classifiers to evolutionary generated adversarial examples, Neural Netw., vol. 127, pp. 168–181, 2020.
[35]
T. Karras, S. Laine, and T. Aila, A style-based generator architecture for generative adversarial networks, in Proc. 2019 IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Long Beach, CA, USA, 2019, pp. 4396–4405.
[36]
M. Y. Zhai, L. Chen, F. Tung, J. He, M. Nawhal, and G. Mori, Lifelong GAN: Continual learning for conditional image generation, in Proc. 2019 IEEE/CVF Int. Conf. Comput. Vis. (ICCV), Seoul, Republic of Korea, 2019, pp. 2759–2768.
[37]
M. Y. Zhai, L. Chen, J. He, M. Nawhal, F. Tung, and G. Mori, Piggyback GAN: Efficient lifelong learning for image conditioned generation, in Proc. 16th Eur. Conf. Comput. Vis., Glasgow, UK, 2020, pp. 397–413.
[38]
M. Y. Zhai, L. Chen, and G. Mori, Hyper-LifelongGAN: Scalable lifelong learning for image conditioned generation, in Proc. 2021 IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR2021), Nashville, TN, USA, 2021, pp. 2246–2255.
[39]
K. Sohn, X. Yan, and H. Lee, Learning structured output representation using deep conditional generative models, in Proc. 28th Int. Conf. Neural Inf. Process. Syst. (NIPS), Montreal, Canada, 2015, pp. 3483–3491.
[40]
I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner, Beta-VAE: Learning basic visual concepts with a constrained variational framework, presented at 5th Int. Conf. Learn. Represent., Toulon, France, 2017.
[41]
H. Shao, Z. Xiao, S. Yao, D. Sun, A. Zhang, S. Liu, T. Wang, J. Li, and T. Abdelzaher, ControlVAE: Tuning, analytical properties, and performance analysis, IEEE Trans. Pattern Anal. Mach. Intell.
[42]
Z. Zhao, D. Dua, and S. Singh, Generating natural adversarial examples, presented at 6th Int. Conf. Learn. Represent., Vancouver, Canada, 2018.
[43]
Y. Song, R. Shu, N. Kushman, and S. Ermon, Constructing unrestricted adversarial examples with generative models, in Proc. 32nd Int. Conf. Neural Inf. Process. Syst. (NeurIPS 2018), Montreal, Canada, 2018, pp. 8322–8333.
[44]
M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, A general framework for adversarial examples with objectives, ACM Trans. Priv. Secur., vol. 22, no. 3, pp. 1–30, 2019.
[45]
J. Liu, Y. Tian, R. Zhang, Y. Sun, and C. Wang, A two-stage generative adversarial networks with semantic content constraints for adversarial example generation, IEEE Access, vol. 8, pp. 205766–205777, 2020.
[46]
P. Tang, W. Wang, J. Lou, and L. Xiong, Generating adversarial examples with distance constrained adversarial imitation networks, IEEE Trans. Dependable Secur. Comput.
[47]
C. Hu, X. Wu, and Z. Li, Generating adversarial examples with elastic-net regularized boundary equilibrium generative adversarial network, Pattern Recognit. Lett., vol. 140, pp. 281–287, 2020.
[48]
W. Zhang, Generating adversarial examples in one shot with image-to-image translation GAN, IEEE Access, vol. 7, pp. 151103–151119, 2019.
[49]
W. Jiang, Z. He, J. Zhan, W. Pan, and D. Adhikari, Research progress and challenges on application-driven adversarial examples: A survey, ACM Trans. Cyber Phys. Syst., vol. 5, no. 4, p. 39, 2021.
[50]
A. Odena, C. Olah, and J. Shlens, Conditional image synthesis with auxiliary classifier GANs, in Proc. 34th Int. Conf. Mach. Learn., Sydney, Australia, 2017, pp. 2642–2651.
[51]
L. Deng, The MNIST database of handwritten digit images for machine learning research, IEEE Signal Process. Mag., vol. 29, no. 6, pp. 141–142, 2012.
[52]
H. Xiao, K. Rasul, and R. Vollgraf, Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms, arXiv preprint arXiv:1708.07747, 2017.
[53]
A. Torralba, R. Fergus, and W. T. Freeman, 80 million tiny images: A large data set for nonparametric object and scene recognition, IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 11, pp. 1958–1970, 2008.
[54]
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition, Proc. IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[55]
S. Zagoruyko and N. Komodakis, Wide residual networks, in Proc. British Mach. Vis. Conf. (BMVC 2016), York, UK, 2016, pp. 1–12.
[56]
C. Z. Wang, X. Lv, W. P. Ding, and X. D. Fan, No-reference image quality assessment with multi-scale weighted residuals and channel attention mechanism, Soft Comput., vol. 26, pp. 13449–13465, 2022.
[57]
X. Lv, C. Wang, X. Fan, Q. Leng, and X. Jiang, A novel image super-resolution algorithm based on multi-scale dense recursive fusion network, Neurocomputing, vol. 489, pp. 98–111, 2022.