Unsupervised Meta-Learning via Few-shot Pseudo-supervised Contrastive Learning

Unsupervised meta-learning aims to learn generalizable knowledge across a distribution of tasks constructed from unlabeled data. The main challenge is how to construct diverse tasks for meta-learning without label information; recent works have proposed, e.g., pseudo-labeling via pretrained representations or synthesizing samples via generative models. However, such task construction strategies are fundamentally limited by their heavy reliance on immutable pseudo-labels during meta-learning and on the quality of the representations or generated samples. To overcome these limitations, we propose a simple yet effective unsupervised meta-learning framework, coined Pseudo-supervised Contrast (PsCo), for few-shot classification. Inspired by the recent self-supervised learning literature, PsCo utilizes a momentum network and a queue of previous batches to improve pseudo-labeling and construct diverse tasks in a progressive manner. Our extensive experiments demonstrate that PsCo outperforms existing unsupervised meta-learning methods on various in-domain and cross-domain few-shot classification benchmarks. We also validate that PsCo scales easily to a large-scale benchmark, while recent prior-art meta-learning schemes do not.
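To make the abstract's mechanism concrete, here is a minimal, hypothetical sketch of the two ingredients it names: an exponential-moving-average (momentum) network update and pseudo-label assignment from a queue of past embeddings via cosine similarity. All function names, shapes, and the top-k assignment rule are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def momentum_update(online_params, momentum_params, m=0.99):
    """EMA update of the momentum network (illustrative):
    momentum <- m * momentum + (1 - m) * online."""
    return m * momentum_params + (1 - m) * online_params

def pseudo_supports(queries, queue, shots=1):
    """Assign each query its `shots` most similar queue entries as a
    pseudo-labeled support set, using cosine similarity (assumed rule)."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = queue / np.linalg.norm(queue, axis=1, keepdims=True)
    sim = q @ k.T                      # (num_queries, queue_size)
    return np.argsort(-sim, axis=1)[:, :shots]

# Toy usage: a FIFO queue of 32 past embeddings, 5 query embeddings.
rng = np.random.default_rng(0)
queue = rng.normal(size=(32, 16))
queries = rng.normal(size=(5, 16))
support_idx = pseudo_supports(queries, queue, shots=1)  # shape (5, 1)
```

The queue serves as a large, cheap pool of candidate supports, so tasks can be re-formed every iteration as the representations improve, rather than being fixed once by an initial clustering.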

PDF Abstract 6th Workshop 2022
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Unsupervised Few-Shot Image Classification | Mini-Imagenet 5-way (1-shot) | PsCo | Accuracy | 46.70 | # 15 |
| Unsupervised Few-Shot Image Classification | Mini-Imagenet 5-way (5-shot) | PsCo | Accuracy | 63.26 | # 15 |
