Few-shot classification models trained on clean samples classify poorly on real-world samples, which carry various scales of noise. To make models better at recognizing noisy samples, researchers typically rely on data augmentation or train on noisy samples generated by adversarial training. However, existing methods still have problems: (i) data augmentation improves the robustness of the model only to a limited extent; (ii) the noise generated by adversarial training often causes overfitting and reduces the generalization ability of the model, which is especially damaging in few-shot classification; and (iii) most existing methods cannot adaptively generate appropriate noise. Addressing these three points, this paper proposes a noise-robust few-shot classification algorithm, VADA (Variational Adversarial Data Augmentation). Unlike existing methods, VADA uses a variational noise generator, trained adversarially, to produce an adaptive noise distribution for each sample, and optimizes the generator by minimizing the expectation of the empirical risk. Applying VADA during training makes few-shot classification more robust to noisy data while retaining generalization ability. We use FEAT and ProtoNet as baseline models, and verify accuracy on several common few-shot classification datasets, including MiniImageNet, TieredImageNet, and CUB. After training with VADA, the classification accuracy of the models increases on samples with various scales of noise.
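The abstract's core idea — a per-sample variational noise generator trained adversarially against a prototype-based classifier, with the expected empirical risk estimated over sampled noise — can be sketched in a toy form. The sketch below is an illustration only, not the paper's implementation: the generator architecture (two linear maps producing a noise mean and log-variance), the finite-difference gradient ascent (a stand-in for backpropagation), and all names and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def proto_loss(query, prototypes, label):
    """ProtoNet-style cross-entropy: softmax over negative squared distances."""
    logits = -((prototypes - query) ** 2).sum(axis=1)
    logits = logits - logits.max()                       # numerical stability
    return -(logits[label] - np.log(np.exp(logits).sum()))

class VariationalNoiseGenerator:
    """Hypothetical generator mapping a sample x to a per-sample noise
    distribution N(mu(x), sigma(x)^2); the architecture is illustrative."""
    def __init__(self, dim):
        self.W_mu = rng.normal(scale=0.1, size=(dim, dim))
        self.W_lv = rng.normal(scale=0.1, size=(dim, dim))

    def sample(self, x, eps):
        mu, log_var = x @ self.W_mu, x @ self.W_lv
        return x + mu + np.exp(0.5 * log_var) * eps      # reparameterization trick

def expected_risk(gen, x, prototypes, label, eps):
    """Monte Carlo estimate of the expected empirical risk over the noise."""
    return np.mean([proto_loss(gen.sample(x, e), prototypes, label) for e in eps])

def adversarial_step(gen, x, prototypes, label, eps, lr=0.01, h=1e-5):
    """Toy finite-difference gradient ascent on W_mu (a stand-in for backprop):
    the generator learns noise that *increases* the classifier's risk."""
    grad = np.zeros_like(gen.W_mu)
    for i in range(grad.shape[0]):
        for j in range(grad.shape[1]):
            gen.W_mu[i, j] += h
            up = expected_risk(gen, x, prototypes, label, eps)
            gen.W_mu[i, j] -= 2 * h
            down = expected_risk(gen, x, prototypes, label, eps)
            gen.W_mu[i, j] += h
            grad[i, j] = (up - down) / (2 * h)
    gen.W_mu += lr * grad                                # ascent: harder noise

# Tiny 2-class, 2-D episode: one query, two class prototypes.
dim = 2
prototypes = np.array([[0.0, 0.0], [2.0, 2.0]])
x, label = np.array([0.2, 0.1]), 0
eps = rng.normal(size=(8, dim))                          # fixed noise draws
gen = VariationalNoiseGenerator(dim)

before = expected_risk(gen, x, prototypes, label, eps)
adversarial_step(gen, x, prototypes, label, eps)
after = expected_risk(gen, x, prototypes, label, eps)
print(f"risk before {before:.4f} -> after {after:.4f}")
```

In the full method, the classifier would then be trained to minimize this expected risk over the adversarially adapted noise, alternating with the generator update; fixing the noise draws `eps` across the two risk evaluations is what makes the finite-difference comparison meaningful in this sketch.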
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.