Unsupervised Few-Shot Image Classification
14 papers with code • 4 benchmarks • 2 datasets
In contrast to (supervised) few-shot image classification, in unsupervised few-shot image classification only unlabeled data are available during the pre-training or meta-training stage.
In this paper, we propose to train a more generalized embedding network with self-supervised learning (SSL), which can provide robust representations for downstream tasks by learning from the data itself.
Building on these insights and on advances in self-supervised learning, we propose a transfer learning approach which constructs a metric embedding that clusters unlabeled prototypical samples and their augmentations closely together.
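A minimal sketch of this kind of metric embedding objective: each unlabeled sample acts as its own prototype, and a cross-entropy loss pulls its augmentations toward it and away from the other prototypes. The linear `embed` map, the function names, and the temperature value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def embed(x, W):
    """Hypothetical embedding: linear map followed by L2 normalization."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def proto_cluster_loss(protos, augs, W, temperature=0.5):
    """Cross-entropy that clusters each augmentation with the embedding
    of its own prototype sample and repels the other prototypes.

    protos : (N, D) one unlabeled 'prototype' image per pseudo-class
    augs   : (N, D) one augmentation of each prototype (same order)
    """
    zp = embed(protos, W)             # (N, E) prototype embeddings
    za = embed(augs, W)               # (N, E) augmentation embeddings
    logits = za @ zp.T / temperature  # cosine similarity to every prototype
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # the correct "class" of augmentation i is prototype i (the diagonal)
    return -np.mean(np.diag(log_probs))
```

When the augmentations stay close to their prototypes, the loss is low; mismatched pairs drive it up, which is exactly the clustering pressure described above.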
The majority of existing few-shot learning methods describe image relations with binary labels.
Importantly, we highlight the importance of distribution diversity in augmentation-based pretext few-shot tasks, which can effectively alleviate overfitting and help the few-shot model learn more robust feature representations.
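To make the augmentation-based pretext construction concrete, here is a hedged sketch of how such a pseudo few-shot task might be assembled: each sampled unlabeled image defines its own pseudo-class, and drawing from a *diverse* pool of augmentations supplies its support and query examples. The function name, the task sizes, and the callable-based augmentation interface are all assumptions for illustration.

```python
import random

def make_pretext_task(unlabeled, augmentations, n_way=5, k_shot=1, q_query=3):
    """Build one pseudo few-shot task from unlabeled data.

    unlabeled     : list of images (any objects the augmentations accept)
    augmentations : list of callables; diversity here is what the
                    distribution-diversity argument is about
    Returns (support, query) lists of (view, pseudo_label) pairs.
    """
    anchors = random.sample(unlabeled, n_way)   # one anchor per pseudo-class
    support, query = [], []
    for label, img in enumerate(anchors):
        # draw a fresh, randomly chosen augmentation for every view
        views = [random.choice(augmentations)(img) for _ in range(k_shot + q_query)]
        support += [(v, label) for v in views[:k_shot]]
        query += [(v, label) for v in views[k_shot:]]
    return support, query
```

A richer `augmentations` pool widens the per-class view distribution, which is the diversity property the excerpt credits with reducing overfitting.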
Meta-learning has become a practical approach towards few-shot image classification, where "a strategy to learn a classifier" is meta-learned on labeled base classes and can be applied to tasks with novel classes.
Then, the learned model can be used for downstream few-shot classification tasks: we obtain task-specific parameters by performing semi-supervised EM on the latent representations of the support and query sets, and predict query labels by computing aggregated posteriors.
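A minimal sketch of such a semi-supervised EM step, assuming a unit-variance Gaussian mixture over the latent features: support responsibilities stay fixed at their one-hot labels, query points are soft-assigned in the E-step, and both contribute to the class means in the M-step. The spherical-Gaussian assumption and all names here are illustrative, not the paper's exact model.

```python
import numpy as np

def semi_supervised_em(support, support_y, query, n_way, n_iter=10):
    """Run EM on latent features with support labels held fixed.

    support : (S, D) support-set latent features, support_y : (S,) labels
    query   : (Q, D) query-set latent features (unlabeled)
    Returns predicted query labels from the final posteriors.
    """
    # initialize class means from the labeled support set
    mu = np.stack([support[support_y == c].mean(axis=0) for c in range(n_way)])
    resp_s = np.eye(n_way)[support_y]  # fixed one-hot responsibilities
    for _ in range(n_iter):
        # E-step: posterior of each query point under unit-variance Gaussians
        d = ((query[:, None, :] - mu[None, :, :]) ** 2).sum(-1)  # (Q, n_way)
        logit = -0.5 * d
        logit -= logit.max(axis=1, keepdims=True)
        resp_q = np.exp(logit)
        resp_q /= resp_q.sum(axis=1, keepdims=True)
        # M-step: re-estimate means from fixed support plus soft query counts
        num = resp_s.T @ support + resp_q.T @ query
        den = resp_s.sum(0)[:, None] + resp_q.sum(0)[:, None]
        mu = num / den
    return resp_q.argmax(axis=1)
```

The `argmax` over the final responsibilities plays the role of predicting query labels from the aggregated posteriors.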
In this work, we show that the core reason for this is the lack of a clustering-friendly property in the embedding space.