Squeezing Backbone Feature Distributions to the Max for Efficient Few-Shot Learning
In many real-life problems, it is difficult to acquire or label large amounts of data, resulting in so-called few-shot learning problems. Few-shot classification is challenging due to the uncertainty caused by relying on only a handful of labeled samples. In the past few years, many methods have been proposed with the common aim of transferring knowledge acquired on a previously solved task, often by using a pretrained feature extractor. As such, if the initial task contains many labeled samples, the limitations of few-shot learning can be partly circumvented. A shortcoming of existing methods is that they often require priors about the data distribution, such as the balance between the considered classes. In this paper, we propose a novel transfer-based method with a double aim: achieving state-of-the-art performance on standardized few-shot learning benchmarks, while not requiring such restrictive priors. Our methodology handles both the inductive case, where predictions are made on test samples independently of each other, and the transductive case, where a joint (batch) prediction is performed over the whole test set.
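To make the inductive setting concrete, here is a minimal, hypothetical sketch of a transfer-based few-shot pipeline: features produced by a pretrained backbone are classified with a simple nearest-class-mean rule, so each query is predicted independently. This illustrates the general family of approaches described above, not the method proposed in the paper; the function names, array shapes, and toy data are all assumptions.

# Minimal sketch (an assumption, not the paper's algorithm): inductive
# few-shot classification on top of precomputed backbone features,
# using a nearest-class-mean rule.
import numpy as np

def nearest_class_mean(support_feats, support_labels, query_feats):
    """Predict each query independently (inductive): assign it to the
    class whose support-set centroid is closest in feature space."""
    classes = np.unique(support_labels)
    # One centroid per class, averaged over that class's few labeled samples.
    centroids = np.stack([support_feats[support_labels == c].mean(axis=0)
                          for c in classes])
    # Euclidean distance between every query and every centroid.
    dists = np.linalg.norm(query_feats[:, None, :] - centroids[None, :, :],
                           axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy usage: a 5-way 1-shot episode with 64-dimensional backbone features.
rng = np.random.default_rng(0)
support = rng.normal(size=(5, 64))                   # one labeled sample per class
labels = np.arange(5)
queries = support + 0.1 * rng.normal(size=(5, 64))   # slightly perturbed copies
print(nearest_class_mean(support, labels, queries))  # expected: [0 1 2 3 4]

A transductive variant would instead process the query batch jointly, for instance by iteratively refining the class centroids using the unlabeled queries, which is where priors such as class balance typically enter in existing methods.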
Download the manuscript.
Bibtex:
@article{HuPatGri2022,
  author  = {Yuqing Hu and Stéphane Pateux and Vincent Gripon},
  title   = {Squeezing Backbone Feature Distributions to the Max for Efficient Few-Shot Learning},
  journal = {Algorithms},
  year    = {2022},
  volume  = {15},
  number  = {5},
  month   = {April},
}