Few-Shot Learning

1013 papers with code • 22 benchmarks • 41 datasets

Few-Shot Learning is an example of meta-learning, where a learner is trained on several related tasks during the meta-training phase, so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation for various tasks and train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
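
As a concrete illustration of that recipe, here is a minimal PyTorch sketch of fitting a task-specific linear head on top of a frozen shared encoder. The encoder architecture, dimensions, and the `solve_task` helper are illustrative assumptions, not code from the cited paper.

```python
import torch
import torch.nn as nn

# Shared representation, meta-trained once and then frozen (toy architecture).
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 64))

def solve_task(support_x, support_y, n_way, steps=100):
    """Fit a task-specific linear classifier on frozen shared features."""
    head = nn.Linear(64, n_way)                       # the only per-task parameters
    opt = torch.optim.SGD(head.parameters(), lr=0.1)
    with torch.no_grad():
        feats = encoder(support_x)                    # representation is reused, not retrained
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(head(feats), support_y).backward()
        opt.step()
    return head                                       # classify queries via head(encoder(x_q))
```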

Most implemented papers

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

cbfinn/maml ICML 2017

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
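
A hedged sketch of one MAML meta-update in plain PyTorch follows; the toy sine-regression task family, `inner_lr`, and meta-batch size are illustrative choices, not the cbfinn/maml implementation.

```python
import torch
import torch.nn as nn
from torch.func import functional_call

model = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr, loss_fn = 0.01, nn.MSELoss()

def sample_sine_task(n=10):
    """Toy task family: regress y = a*sin(x + b) with random amplitude/phase."""
    a, b = torch.rand(1) * 4 + 0.1, torch.rand(1) * 3.14
    def draw():
        x = torch.rand(n, 1) * 10 - 5
        return x, a * torch.sin(x + b)
    return draw

meta_opt.zero_grad()
for _ in range(4):                                    # meta-batch of tasks
    draw = sample_sine_task()
    (x_s, y_s), (x_q, y_q) = draw(), draw()           # support and query from the same task
    params = dict(model.named_parameters())
    # Inner loop: one gradient step, keeping the graph so outer grads are second-order.
    loss_s = loss_fn(functional_call(model, params, (x_s,)), y_s)
    grads = torch.autograd.grad(loss_s, list(params.values()), create_graph=True)
    adapted = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}
    # Outer loop: evaluate the adapted parameters on query data, backprop to the init.
    loss_fn(functional_call(model, adapted, (x_q,)), y_q).backward()
meta_opt.step()                                       # one MAML meta-update
```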

Language Models are Few-Shot Learners

openai/gpt-3 NeurIPS 2020

By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do.
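
The few-shot behavior here is in-context learning: the "training examples" are simply demonstrations placed in the prompt, and no weights are updated. A minimal illustrative prompt (the English-to-French demonstrations echo a figure from the paper):

```python
# The few-shot "training set" lives entirely in the context window.
prompt = """Translate English to French.

sea otter => loutre de mer
peppermint => menthe poivrée
plush giraffe => girafe en peluche
cheese =>"""
# A sufficiently large language model is expected to complete this with "fromage".
```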

Prototypical Networks for Few-shot Learning

jakesnell/prototypical-networks NeurIPS 2017

We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class.
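
The classification rule is easy to state in code: a class prototype is the mean embedding of its support examples, and queries are scored by negative squared distance to each prototype. A minimal sketch, assuming a stand-in `embed` encoder (not the jakesnell/prototypical-networks code):

```python
import torch

def proto_logits(embed, support_x, support_y, query_x, n_way):
    """Prototype = mean support embedding per class; logits = -squared distance."""
    z_s, z_q = embed(support_x), embed(query_x)
    prototypes = torch.stack([z_s[support_y == c].mean(0) for c in range(n_way)])
    return -torch.cdist(z_q, prototypes).pow(2)   # (n_query, n_way); train with cross-entropy
```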

Matching Networks for One Shot Learning

oscarknagg/few-shot NeurIPS 2016

Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches.
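
The prediction rule behind this result is attention over the support set: a query's label distribution is a similarity-weighted combination of one-hot support labels. A hedged sketch with a stand-in `embed` encoder:

```python
import torch
import torch.nn.functional as F

def matching_predict(embed, support_x, support_y, query_x, n_way):
    """Label distribution per query: attention-weighted one-hot support labels."""
    z_s = F.normalize(embed(support_x), dim=1)       # (S, d) unit-norm support embeddings
    z_q = F.normalize(embed(query_x), dim=1)         # (Q, d) unit-norm query embeddings
    attn = F.softmax(z_q @ z_s.t(), dim=1)           # cosine-similarity attention over support
    return attn @ F.one_hot(support_y, n_way).float()   # (Q, n_way)
```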

A Closer Look at Few-shot Classification

wyharveychen/CloserLookFewShot ICLR 2019

Few-shot classification aims to learn a classifier that can recognize classes unseen during training from only a limited number of labeled examples.

Learning to Compare: Relation Network for Few-Shot Learning

floodsung/LearningToCompare_FSL CVPR 2018

Once trained, an RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class, without further updating the network.
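
A minimal sketch of such a relation head, assuming precomputed query and per-class features and trained with MSE against 0/1 targets as in the paper; the module sizes are illustrative, not the floodsung/LearningToCompare_FSL code:

```python
import torch
import torch.nn as nn

class RelationHead(nn.Module):
    """Learned similarity: an MLP over concatenated (query, class) feature pairs."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * feat_dim, 64), nn.ReLU(),
                               nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, query_feats, class_feats):
        Q, C = query_feats.size(0), class_feats.size(0)
        pairs = torch.cat([query_feats.unsqueeze(1).expand(Q, C, -1),
                           class_feats.unsqueeze(0).expand(Q, C, -1)], dim=-1)
        return self.g(pairs).squeeze(-1)             # (Q, C) relation scores in (0, 1)
```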

On First-Order Meta-Learning Algorithms

openai/supervised-reptile 8 Mar 2018

This paper considers meta-learning problems, where there is a distribution of tasks, and we would like to obtain an agent that performs well (i.e., learns quickly) when presented with a previously unseen task sampled from this distribution.
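
The resulting Reptile update is strikingly simple: adapt a copy of the weights on a task with plain SGD, then move the initialization toward the adapted weights. A serial-version sketch (the task format, step counts, and learning rates are illustrative):

```python
import copy
import torch
import torch.nn.functional as F

def reptile_step(model, tasks, inner_steps=5, inner_lr=0.01, meta_lr=0.1):
    """Serial Reptile: for each task, adapt a copy, then nudge the init toward it."""
    init = copy.deepcopy(model.state_dict())
    for x, y in tasks:                               # each task: (inputs, targets)
        model.load_state_dict(init)
        opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                 # plain first-order SGD on this task
            opt.zero_grad()
            F.mse_loss(model(x), y).backward()
            opt.step()
        with torch.no_grad():                        # Reptile: init += eps * (adapted - init)
            for k, v in model.state_dict().items():
                init[k] += meta_lr * (v - init[k])
    model.load_state_dict(init)
```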

Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning

yinboc/few-shot-meta-baseline ICCV 2021

The boundary between these two lines of work has so far been underexplored, and the effectiveness of meta-learning in few-shot learning remains unclear.

The Power of Scale for Parameter-Efficient Prompt Tuning

google-research/prompt-tuning EMNLP 2021

More remarkably, through ablations on model size using T5, we show that prompt tuning becomes more competitive with scale: as models exceed billions of parameters, our method "closes the gap" and matches the strong performance of model tuning (where all model weights are tuned).
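
Mechanically, prompt tuning trains only a small matrix of "virtual token" embeddings prepended to the input while the language model stays frozen. A hedged sketch of that idea (the module and dimensions are assumptions, not the google-research/prompt-tuning API):

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """The only trainable weights: a matrix of virtual-token embeddings."""
    def __init__(self, n_tokens=20, embed_dim=512):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_tokens, embed_dim) * 0.02)

    def forward(self, token_embeds):                 # token_embeds: (batch, seq, embed_dim)
        p = self.prompt.unsqueeze(0).expand(token_embeds.size(0), -1, -1)
        return torch.cat([p, token_embeds], dim=1)   # prepend virtual tokens to every input

# Usage idea: freeze all language-model weights and optimize only soft_prompt.prompt,
# e.g. opt = torch.optim.Adam(soft_prompt.parameters(), lr=0.3)
```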

Meta-SGD: Learning to Learn Quickly for Few-Shot Learning

learnables/learn2learn 31 Jul 2017

In contrast, meta-learning uses many related tasks to learn a meta-learner that can master a new task faster and more accurately from fewer examples, where the choice of meta-learner is crucial.
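
Meta-SGD makes that choice part of the meta-learner itself: alongside the initialization, it learns a per-parameter step size alpha used in the inner update. A minimal sketch of the inner step (alpha's construction and the helper name are illustrative):

```python
import torch
from torch.func import functional_call

def meta_sgd_adapt(model, alpha, support_x, support_y, loss_fn):
    """One inner step where every parameter has its own learned step size."""
    params = dict(model.named_parameters())
    loss = loss_fn(functional_call(model, params, (support_x,)), support_y)
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    # alpha[k] matches params[k] in shape: elementwise, meta-learned step sizes.
    return {k: p - alpha[k] * g for (k, p), g in zip(params.items(), grads)}

# alpha is a dict of learnable tensors shaped like the model's parameters; the outer
# (meta) optimizer updates alpha together with the initial weights via the query loss.
```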