[1]
K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in Proc. 2016 IEEE Conf. Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 770–778.
[2]
J. Redmon and A. Farhadi, YOLOv3: An incremental improvement, arXiv preprint arXiv: 1804.02767, 2018.
[3]
O. Ronneberger, P. Fischer, and T. Brox, U-Net: Convolutional networks for biomedical image segmentation, in Proc. 18th Int. Conf. Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 2015, pp. 234–241.
[4]
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, Intriguing properties of neural networks, arXiv preprint arXiv: 1312.6199, 2014.
[5]
I. J. Goodfellow, J. Shlens, and C. Szegedy, Explaining and harnessing adversarial examples, arXiv preprint arXiv: 1412.6572, 2015.
[6]
N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, The limitations of deep learning in adversarial settings, in Proc. 2016 IEEE European Symp. Security and Privacy (EuroS&P), Saarbruecken, Germany, 2016, pp. 372–387.
[7]
S. M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, DeepFool: A simple and accurate method to fool deep neural networks, in Proc. 2016 IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 2574–2582.
[8]
A. Kurakin, I. Goodfellow, and S. Bengio, Adversarial machine learning at scale, arXiv preprint arXiv: 1611.01236, 2017.
[9]
N. Carlini and D. Wagner, Towards evaluating the robustness of neural networks, in Proc. 2017 IEEE Symp. Security and Privacy (SP), San Jose, CA, USA, 2017, pp. 39–57.
[10]
A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, Towards deep learning models resistant to adversarial attacks, arXiv preprint arXiv: 1706.06083, 2019.
[11]
W. Chen, Z. Zhang, X. Hu, and B. Wu, Boosting decision-based black-box adversarial attacks with random sign flip, in Proc. 16th European Conf. Computer Vision, Glasgow, UK, 2020, pp. 276–293.
[12]
A. Ilyas, L. Engstrom, A. Athalye, and J. Lin, Black-box adversarial attacks with limited queries and information, arXiv preprint arXiv: 1804.08598, 2018.
[13]
J. Lin, C. Song, K. He, L. Wang, and J. E. Hopcroft, Nesterov accelerated gradient and scale invariance for adversarial attacks, arXiv preprint arXiv: 1908.06281, 2020.
[14]
Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li, Boosting adversarial attacks with momentum, in Proc. 2018 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 9185–9193.
[15]
C. Wan, B. Ye, and F. Huang, PID-based approach to adversarial attacks, in Proc. 35th AAAI Conf. Artificial Intelligence, Virtual Event, 2021, pp. 10033–10040.
[16]
X. Wang and K. He, Enhancing the transferability of adversarial attacks through variance tuning, in Proc. 2021 IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 2021, pp. 1924–1933.
[17]
Y. Liu, X. Chen, C. Liu, and D. Song, Delving into transferable adversarial examples and black-box attacks, arXiv preprint arXiv: 1611.02770, 2017.
[18]
H. Zhang, Y. Yu, J. Jiao, E. P. Xing, L. El Ghaoui, and M. I. Jordan, Theoretically principled trade-off between robustness and accuracy, arXiv preprint arXiv: 1901.08573, 2019.
[19]
F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel, Ensemble adversarial training: Attacks and defenses, arXiv preprint arXiv: 1705.07204, 2020.
[20]
S. Fang, J. Li, X. Lin, and R. Ji, Learning to learn transferable attack, in Proc. 36th AAAI Conf. Artificial Intelligence, Virtual Event, 2022, pp. 571–579.
[21]
X. Wang, J. Lin, H. Hu, J. Wang, and K. He, Boosting adversarial transferability through enhanced momentum, arXiv preprint arXiv: 2103.10609, 2021.
[22]
J. Zou, Y. Duan, B. Li, W. Zhang, Y. Pan, and Z. Pan, Making adversarial examples more transferable and indistinguishable, in Proc. 36th AAAI Conf. Artificial Intelligence, Virtual Event, 2022, pp. 3662–3670.
[23]
C. Xie, Z. Zhang, Y. Zhou, S. Bai, J. Wang, Z. Ren, and A. L. Yuille, Improving transferability of adversarial examples with input diversity, in Proc. 2019 IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 2725–2734.
[24]
Y. Dong, T. Pang, H. Su, and J. Zhu, Evading defenses to transferable adversarial examples by translation-invariant attacks, in Proc. 2019 IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 4307–4316.
[25]
Z. Wang, H. Guo, Z. Zhang, W. Liu, Z. Qin, and K. Ren, Feature importance-aware transferable adversarial attacks, arXiv preprint arXiv: 2107.14185, 2022.
[26]
J. Zhang, W. Wu, J. T. Huang, Y. Huang, W. Wang, Y. Su, and M. R. Lyu, Improving adversarial transferability via neuron attribution-based attacks, in Proc. 2022 IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 2022, pp. 14973–14982.
[27]
S. Tang, R. Gong, Y. Wang, A. Liu, J. Wang, X. Chen, F. Yu, X. Liu, D. Song, A. Yuille, et al., RobustART: Benchmarking robustness on architecture design and training techniques, arXiv preprint arXiv: 2109.05211, 2022.
[28]
C. Xie, Y. Wu, L. van der Maaten, A. L. Yuille, and K. He, Feature denoising for improving adversarial robustness, in Proc. 2019 IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 501–509.
[29]
M. Naseer, S. Khan, M. Hayat, F. S. Khan, and F. Porikli, A self-supervised approach for adversarial robustness, in Proc. 2020 IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020, pp. 259–268.
[31]
B. Li, C. Chen, W. Wang, and L. Carin, Second-order adversarial attack and certifiable robustness, arXiv preprint arXiv: 1809.03113, 2019.
[33]
R. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud, Neural ordinary differential equations, in Proc. 32nd Int. Conf. Neural Information Processing Systems, Montréal, Canada, 2018, pp. 6572–6583.
[34]
B. Kim, B. Chudomelka, J. Park, J. Kang, Y. Hong, and H. J. Kim, Robust neural networks inspired by strong stability preserving Runge-Kutta methods, in Proc. 16th European Conf. Computer Vision, Glasgow, UK, 2020, pp. 416–432.
[35]
E. Hairer and G. Wanner, Solving Ordinary Differential Equations II, 2nd ed. Berlin, Germany: Springer, 1996.
[37]
C. Zhang, P. Benz, A. Karjauv, J. W. Cho, K. Zhang, and I. S. Kweon, Investigating top-k white-box and transferable black-box attack, in Proc. 2022 IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 2022, pp. 15064–15073.
[39]
C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, Rethinking the inception architecture for computer vision, in Proc. 2016 IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 2818–2826.
[40]
C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, Inception-v4, Inception-ResNet and the impact of residual connections on learning, in Proc. 31st AAAI Conf. Artificial Intelligence, San Francisco, CA, USA, 2017, pp. 4278–4284.
[41]
G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, Densely connected convolutional networks, in Proc. 2017 IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 2017, pp. 2261–2269.
[42]
N. Ma, X. Zhang, H. T. Zheng, and J. Sun, ShuffleNet V2: Practical guidelines for efficient CNN architecture design, in Proc. 15th European Conf. Computer Vision, Munich, Germany, 2018, pp. 122–138.
[43]
F. Liao, M. Liang, Y. Dong, T. Pang, X. Hu, and J. Zhu, Defense against adversarial attacks using high-level representation guided denoiser, in Proc. 2018 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 1778–1787.
[44]
X. Jia, X. Wei, X. Cao, and H. Foroosh, ComDefend: An efficient image compression model to defend adversarial examples, in Proc. 2019 IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 6077–6085.
[45]
Z. Liu, Q. Liu, T. Liu, N. Xu, X. Lin, Y. Wang, and W. Wen, Feature distillation: DNN-oriented JPEG compression against adversarial examples, in Proc. 2019 IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 860–868.