Recent advances in deep learning have produced significant breakthroughs across many fields. However, these methods typically require extensive labeled data for optimal performance, which is costly and often impractical to obtain. Few-Shot Learning (FSL) addresses this issue: it aims to learn effectively from a small number of labeled samples and to generalize well at test time. This paper provides a comprehensive survey of FSL, reviewing prominent deep learning based approaches. We define FSL through a review of the machine learning literature and specify the "N-way K-shot" paradigm to distinguish it from related learning problems. Next, we motivate a classification of FSL methods by analyzing the Vapnik−Chervonenkis dimension of neural networks, which shows that good generalization to new, unseen instances requires either abundant labeled examples or a constrained hypothesis space. Accordingly, we categorize FSL methods into three types based on whether they increase the effective number of labeled samples or reduce the hypothesis space: data augmentation, model-based methods, and algorithm-optimized approaches. Using this taxonomy, we review representative methods and evaluate their strengths and weaknesses, and we compare the surveyed techniques on benchmark datasets. Moreover, we examine specific sub-tasks within FSL, such as applications in computer vision and robotics. Lastly, we discuss the limitations, unique challenges, and future directions of FSL, aiming to offer a thorough understanding of this rapidly evolving field.
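To make the "N-way K-shot" paradigm concrete, the following is a minimal sketch of how an FSL evaluation episode is typically sampled: N classes are drawn, K labeled support examples are taken per class, and the model is scored on held-out queries from the same classes. The function name, the dict-based dataset structure, and all parameter names are illustrative assumptions, not from the survey itself.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=5, seed=None):
    """Sample one N-way K-shot episode.

    `dataset` maps each class label to a list of examples (a hypothetical
    structure chosen for illustration). Returns a support set with K
    examples per class and a disjoint query set for evaluation.
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)  # draw N of the classes
    support, query = [], []
    for label in classes:
        examples = rng.sample(dataset[label], k_shot + q_queries)
        support += [(x, label) for x in examples[:k_shot]]  # K shots per class
        query += [(x, label) for x in examples[k_shot:]]    # held-out queries
    return support, query

# Toy dataset: 10 classes with 20 examples each.
data = {c: [f"img_{c}_{i}" for i in range(20)] for c in range(10)}
support, query = sample_episode(data, n_way=5, k_shot=1, q_queries=5, seed=0)
print(len(support), len(query))  # 5-way 1-shot: 5 support examples, 25 queries
```

In a 5-way 1-shot setting, the learner sees only one example of each of the five classes before classifying the queries, which is what makes generalization from so little data the central difficulty FSL methods target.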
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).