Unsupervised Few-Shot Learning

12 papers with code • 0 benchmarks • 0 datasets

In contrast to supervised few-shot learning, unsupervised few-shot learning has access only to unlabeled data during the pre-training or meta-training stage.

Most implemented papers

Self-Supervision Can Be a Good Few-Shot Learner

bbbdylan/unisiam 19 Jul 2022

Specifically, we maximize the mutual information (MI) of instances and their representations with a low-bias MI estimator to perform self-supervised pre-training.
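The paper's low-bias MI estimator is its own contribution; as a rough illustration of what "maximizing MI between instances and their representations" can look like in self-supervised pre-training, here is a minimal InfoNCE-style sketch in PyTorch. The encoder, temperature, and batch construction below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def infonce_mi_lower_bound(z1, z2, temperature=0.1):
    """InfoNCE loss between two augmented views; minimizing it maximizes a
    lower bound on the mutual information between instances and their
    representations. (Illustrative only; UniSiam proposes a different,
    low-bias estimator.)"""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # pairwise view similarities
    targets = torch.arange(z1.size(0))      # matching views are the positives
    return F.cross_entropy(logits, targets)

# Toy usage with a placeholder encoder producing 128-d embeddings.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
x1 = torch.randn(64, 3, 32, 32)  # first augmented view of a batch
x2 = torch.randn(64, 3, 32, 32)  # second augmented view of the same images
loss = infonce_mi_lower_bound(encoder(x1), encoder(x2))
loss.backward()
```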

Self-Supervised Prototypical Transfer Learning for Few-Shot Classification

indy-lab/ProtoTransfer 19 Jun 2020

Building on these insights and on advances in self-supervised learning, we propose a transfer learning approach which constructs a metric embedding that clusters unlabeled prototypical samples and their augmentations closely together.
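A hedged sketch of that clustering idea (not the authors' exact objective): treat each unlabeled sample as its own prototype and classify its augmentations against all prototypes by embedding distance. The encoder output, distance, and temperature below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def proto_clustering_loss(prototypes, augmented, temperature=0.5):
    """prototypes: (N, D) embeddings of N unlabeled samples (one prototype each).
    augmented:  (N, K, D) embeddings of K augmentations per sample.
    Each augmentation is classified against all N prototypes by negative
    squared Euclidean distance; its own prototype is the correct class."""
    n, k, d = augmented.shape
    queries = augmented.reshape(n * k, d)
    dists = torch.cdist(queries, prototypes) ** 2          # (N*K, N)
    logits = -dists / temperature
    targets = torch.arange(n).repeat_interleave(k)          # aug j of sample i -> prototype i
    return F.cross_entropy(logits, targets)

# Toy usage with random embeddings standing in for an encoder's output.
protos = torch.randn(16, 64, requires_grad=True)
augs = protos.detach().unsqueeze(1) + 0.1 * torch.randn(16, 3, 64)
print(proto_clustering_loss(protos, augs))
```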

Program synthesis performance constrained by non-linear spatial relations in Synthetic Visual Reasoning Test

anish-lu-yihe/pySVRT 18 Nov 2019

Here we re-considered the human and machine experiments, because they followed different protocols and yielded different statistics.

Rethinking Class Relations: Absolute-relative Supervised and Unsupervised Few-shot Learning

ojss/samptransfer CVPR 2021

The majority of existing few-shot learning methods describe image relations with binary labels.

Diversity Helps: Unsupervised Few-shot Learning via Distribution Shift-based Data Augmentation

WonderSeven/ULDA 13 Apr 2020

Importantly, we highlight the importance of distribution diversity in the augmentation-based pretext few-shot tasks, which effectively alleviates the overfitting problem and helps the few-shot model learn more robust feature representations.
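A rough sketch of the idea, under my own assumptions about the augmentations (this is not the exact ULDA pipeline): when building an unsupervised pretext few-shot task, apply different augmentation distributions to the support and query halves so the two sets are distribution-shifted rather than near-duplicates of each other.

```python
import torch
from torchvision import transforms

# Two deliberately different augmentation distributions (illustrative choices).
support_aug = transforms.Compose([
    transforms.RandomResizedCrop(84, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])
query_aug = transforms.Compose([
    transforms.RandomResizedCrop(84, scale=(0.2, 0.6)),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

def build_pretext_task(images, n_way=5, n_query=3):
    """images: list of unlabeled PIL images. Each of the first n_way images
    defines one pseudo-class; its support shot and query shots come from the
    two different augmentation pipelines, which introduces the distribution
    shift between the support and query sets."""
    support, query, query_labels = [], [], []
    for cls, img in enumerate(images[:n_way]):
        support.append(support_aug(img))
        for _ in range(n_query):
            query.append(query_aug(img))
            query_labels.append(cls)
    return torch.stack(support), torch.stack(query), torch.tensor(query_labels)
```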

Revisiting Unsupervised Meta-Learning via the Characteristics of Few-Shot Tasks

hanlu-nju/revisiting-uml 30 Nov 2020

Meta-learning has become a practical approach towards few-shot image classification, where "a strategy to learn a classifier" is meta-learned on labeled base classes and can be applied to tasks with novel classes.
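For readers new to the episodic setup this sentence describes, here is a minimal prototypical-network-style episode in PyTorch. The placeholder linear encoder and toy episode sizes are assumptions for illustration, not this paper's method.

```python
import torch
import torch.nn.functional as F

def prototypical_episode_loss(encoder, support_x, support_y, query_x, query_y):
    """One N-way episode: prototypes are class means of support embeddings,
    and queries are classified by negative squared distance to the prototypes."""
    zs = encoder(support_x)                                   # (N*S, D)
    zq = encoder(query_x)                                     # (N*Q, D)
    n_way = int(support_y.max()) + 1
    protos = torch.stack([zs[support_y == c].mean(0) for c in range(n_way)])
    logits = -torch.cdist(zq, protos) ** 2
    return F.cross_entropy(logits, query_y)

# Toy 5-way 1-shot episode with a placeholder linear encoder.
enc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 84 * 84, 64))
sx, sy = torch.randn(5, 3, 84, 84), torch.arange(5)
qx, qy = torch.randn(15, 3, 84, 84), torch.arange(5).repeat_interleave(3)
print(prototypical_episode_loss(enc, sx, sy, qx, qy))
```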

Meta-GMVAE: Mixture of Gaussian VAE for Unsupervised Meta-Learning

db-Lee/Meta-GMVAE ICLR 2021

Then, the learned model can be used for downstream few-shot classification tasks, where we obtain task-specific parameters by performing semi-supervised EM on the latent representations of the support and query set, and predict labels of the query set by computing aggregated posteriors.
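A minimal sketch of the semi-supervised EM step described here, under simplifying assumptions (one unit-variance Gaussian per class, latent vectors already produced by an encoder); it is not the authors' Meta-GMVAE code.

```python
import torch
import torch.nn.functional as F

def semi_supervised_em(z_support, y_support, z_query, n_way, n_iters=10):
    """EM over a Gaussian mixture in latent space with unit-variance components.
    Support responsibilities are clamped to their known labels (the
    'semi-supervised' part); query responsibilities are re-estimated each
    E-step. Returns the query posteriors, used here as class predictions."""
    resp_s = F.one_hot(y_support, n_way).float()              # fixed by the labels
    means = torch.stack([z_support[y_support == c].mean(0) for c in range(n_way)])
    for _ in range(n_iters):
        # E-step (query only): posterior over components under unit-variance Gaussians.
        logits = -0.5 * torch.cdist(z_query, means) ** 2
        resp_q = torch.softmax(logits, dim=1)
        # M-step: update means from labeled support and soft query assignments.
        resp = torch.cat([resp_s, resp_q], dim=0)             # (Ns+Nq, n_way)
        z = torch.cat([z_support, z_query], dim=0)
        means = (resp.t() @ z) / resp.sum(0, keepdim=True).t()
    return resp_q

# Toy 5-way task in a 16-d latent space.
zs, ys = torch.randn(25, 16), torch.arange(5).repeat_interleave(5)
zq = torch.randn(15, 16)
pred = semi_supervised_em(zs, ys, zq, n_way=5).argmax(dim=1)
```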

UVStyle-Net: Unsupervised Few-shot Learning of 3D Style Similarity Measure for B-Reps

AutodeskAILab/UVStyle-Net ICCV 2021

Boundary Representations (B-Reps) are the industry standard in 3D Computer Aided Design/Manufacturing (CAD/CAM) and industrial design due to their fidelity in representing stylistic details.

Trip-ROMA: Self-Supervised Learning with Triplets and Random Mappings

wenbinlee/trip-roma 22 Jul 2021

However, in small data regimes, we cannot obtain a sufficient number of negative pairs, nor can we effectively avoid the overfitting problem when negatives are not used at all.
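For context on the triplet-plus-random-mapping alternative the title refers to, here is a minimal sketch combining a triplet loss with a freshly sampled random projection; the margin, projection dimension, and sampling scheme are my assumptions, not the Trip-ROMA recipe.

```python
import torch
import torch.nn.functional as F

def triplet_with_random_mapping(anchor, positive, negative, out_dim=32, margin=0.5):
    """Triplet loss computed after a freshly sampled random linear mapping,
    so each call compares the embeddings in a different random subspace.
    (Illustrative sketch; not the exact Trip-ROMA objective.)"""
    d = anchor.size(1)
    proj = torch.randn(d, out_dim) / d ** 0.5       # random mapping, resampled per call
    a, p, n = anchor @ proj, positive @ proj, negative @ proj
    return F.triplet_margin_loss(a, p, n, margin=margin)

# Toy usage: anchors and positives are two augmentations of the same images,
# negatives are embeddings of other images.
emb = torch.randn(32, 128, requires_grad=True)
pos = emb + 0.05 * torch.randn(32, 128)
neg = torch.randn(32, 128)
loss = triplet_with_random_mapping(emb, pos, neg)
loss.backward()
```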

Self-Attention Message Passing for Contrastive Few-Shot Learning

ojss/samptransfer 12 Oct 2022

Humans have a unique ability to learn new representations from just a handful of examples with little to no supervision.