Few-Shot Image Classification
139 papers with code • 75 benchmarks • 19 datasets
Few-shot image classification is the task of doing image classification with only a few examples for each category (typically < 6 examples).
(Image credit: Learning Embedding Adaptation for Few-Shot Learning)
Libraries
Use these libraries to find Few-Shot Image Classification models and implementations.
Most implemented papers
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
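As a rough illustration of the idea, the sketch below (PyTorch, using a toy linear classifier with explicit parameters and hypothetical task tuples) takes one gradient step per task on the support set and then updates the shared initialization from the query-set loss. It is a minimal second-order sketch under those assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F

# Toy linear classifier kept as explicit tensors so the adapted weights
# produced in the inner loop can be plugged straight into the query loss.
def forward(params, x):
    w, b = params
    return x @ w + b

def maml_outer_step(params, tasks, inner_lr=0.01, outer_lr=0.001):
    """One meta-update over a batch of tasks.

    Each task is assumed to be a tuple (x_support, y_support, x_query, y_query).
    """
    meta_grads = [torch.zeros_like(p) for p in params]
    for x_s, y_s, x_q, y_q in tasks:
        # Inner loop: one gradient step on the support set. create_graph=True
        # keeps the graph so the outer gradient flows through the adaptation.
        loss_s = F.cross_entropy(forward(params, x_s), y_s)
        grads = torch.autograd.grad(loss_s, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]

        # Outer loop: evaluate the adapted parameters on the query set.
        loss_q = F.cross_entropy(forward(adapted, x_q), y_q)
        task_grads = torch.autograd.grad(loss_q, params)
        meta_grads = [mg + tg for mg, tg in zip(meta_grads, task_grads)]

    # Meta-update on the shared initialization, averaged over the task batch.
    with torch.no_grad():
        for p, mg in zip(params, meta_grads):
            p -= outer_lr * mg / len(tasks)
    return params
```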
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class.
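A minimal sketch of the core computation, assuming an arbitrary embedding network `embed_fn` and episode-local labels 0..n_way-1: each class prototype is the mean embedding of its support examples, and queries are scored by distance to the prototypes.

```python
import torch
import torch.nn.functional as F

def prototypical_logits(embed_fn, x_support, y_support, x_query, n_way):
    """Class prototypes = mean support embedding per class; queries are
    scored by negative squared Euclidean distance to each prototype."""
    z_support = embed_fn(x_support)            # (n_way * k_shot, d)
    z_query = embed_fn(x_query)                # (n_query, d)
    prototypes = torch.stack(
        [z_support[y_support == c].mean(dim=0) for c in range(n_way)]
    )                                          # (n_way, d)
    # Negative squared distance acts as the classification logit.
    return -torch.cdist(z_query, prototypes).pow(2)

# Episode loss: standard cross-entropy over the query set, e.g.
# loss = F.cross_entropy(prototypical_logits(net, xs, ys, xq, n_way), yq)
```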
Matching Networks for One Shot Learning
Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches.
Learning Transferable Visual Models From Natural Language Supervision
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories.
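By instead learning a shared image-text embedding space, CLIP-style models can classify images against arbitrary natural-language class names. The sketch below shows that zero-shot recipe with hypothetical `image_encoder` and `text_encoder` callables (not the library's actual API): normalize both embeddings and pick the most similar prompt.

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image_encoder, text_encoder, images, class_prompts):
    """Score each image against natural-language class prompts by cosine
    similarity of L2-normalized embeddings (the CLIP-style recipe).

    image_encoder / text_encoder are hypothetical callables returning
    (batch, d) embeddings; class_prompts is e.g. ["a photo of a dog", ...].
    """
    img = F.normalize(image_encoder(images), dim=-1)         # (B, d)
    txt = F.normalize(text_encoder(class_prompts), dim=-1)   # (C, d)
    logits = 100.0 * img @ txt.t()   # temperature-scaled similarities
    return logits.argmax(dim=-1)     # predicted class index per image
```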
On First-Order Meta-Learning Algorithms
This paper considers meta-learning problems, where there is a distribution of tasks, and we would like to obtain an agent that performs well (i.e., learns quickly) when presented with a previously unseen task sampled from this distribution.
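One such first-order algorithm is Reptile. A minimal sketch (PyTorch, assuming a task data loader that yields at least `inner_steps` batches): train a copy of the model on a single task, then nudge the initialization toward the adapted weights, with no second-order gradients.

```python
import copy
import torch

def reptile_step(model, task_loader, loss_fn, inner_steps=5,
                 inner_lr=0.01, meta_lr=0.1):
    """One Reptile meta-update (first-order)."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    batches = iter(task_loader)
    for _ in range(inner_steps):
        x, y = next(batches)          # assumes enough batches per task
        opt.zero_grad()
        loss_fn(adapted(x), y).backward()
        opt.step()

    # Interpolate the initialization toward the task-adapted parameters.
    with torch.no_grad():
        for p, p_adapted in zip(model.parameters(), adapted.parameters()):
            p += meta_lr * (p_adapted - p)
```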
Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples
Few-shot classification refers to learning a classifier for new classes given only a few examples.
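In practice, training and evaluation are organized into N-way K-shot episodes. The sketch below shows one way such an episode might be sampled from a generic list of (image, label) pairs; the names and defaults are illustrative, not Meta-Dataset's actual sampling protocol.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way K-shot episode: pick N classes, then K support and
    n_query query examples per class, relabeled 0..N-1 within the episode."""
    by_class = defaultdict(list)
    for idx, (_, label) in enumerate(dataset):
        by_class[label].append(idx)

    classes = random.sample(list(by_class), n_way)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        chosen = random.sample(by_class[c], k_shot + n_query)
        support += [(i, episode_label) for i in chosen[:k_shot]]
        query += [(i, episode_label) for i in chosen[k_shot:]]
    return support, query  # lists of (dataset index, episode-local label)
```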
Learning to Compare: Relation Network for Few-Shot Learning
Once trained, an RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class, without further updating the network.
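A rough sketch of that scoring step, assuming a learned feature encoder `embed_fn` and a relation head `relation_fn` that ends in a sigmoid (the paper operates on convolutional feature maps; flat vectors are used here for brevity):

```python
import torch

def relation_scores(embed_fn, relation_fn, x_support, y_support, x_query, n_way):
    """Concatenate each query embedding with the summed support embedding of
    each class and pass the pair through the relation module, which outputs
    a similarity score in [0, 1]."""
    z_s = embed_fn(x_support)                                    # (N*K, d)
    z_q = embed_fn(x_query)                                      # (Q, d)
    class_feats = torch.stack(
        [z_s[y_support == c].sum(dim=0) for c in range(n_way)])  # (N, d)

    q = z_q.unsqueeze(1).expand(-1, n_way, -1)                   # (Q, N, d)
    s = class_feats.unsqueeze(0).expand(z_q.size(0), -1, -1)     # (Q, N, d)
    pairs = torch.cat([s, q], dim=-1).reshape(-1, 2 * z_s.size(-1))
    scores = relation_fn(pairs).reshape(z_q.size(0), n_way)      # (Q, N)
    return scores  # trained with MSE against one-hot labels in the paper
```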
How to train your MAML
The field of few-shot learning has recently seen substantial advancements.
Meta-Learning with Differentiable Convex Optimization
We propose to use linear classifiers, learned via differentiable convex optimization, as base learners for few-shot representation learning, and show they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks.
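The paper's base learner is a multi-class SVM solved as a differentiable quadratic program. As a simpler stand-in that illustrates the same end-to-end idea, the sketch below uses a closed-form ridge-regression base learner whose solution is differentiable with respect to the support embeddings, so the feature extractor can still be meta-trained through it.

```python
import torch
import torch.nn.functional as F

def ridge_base_learner(z_support, y_support, z_query, n_way, lam=1.0):
    """Closed-form ridge regression on support embeddings.

    The solve is differentiable w.r.t. z_support/z_query, so gradients flow
    back into the embedding network during meta-training. (MetaOptNet itself
    uses a differentiable SVM; this ridge variant is only an illustration.)
    """
    y_onehot = F.one_hot(y_support, n_way).float()           # (S, N)
    d = z_support.size(1)
    # W = (Z^T Z + lam * I)^-1 Z^T Y, computed via a linear solve.
    gram = z_support.t() @ z_support + lam * torch.eye(d)    # (d, d)
    w = torch.linalg.solve(gram, z_support.t() @ y_onehot)   # (d, N)
    return z_query @ w                                       # query logits
```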
Towards a Neural Statistician
We refer to our model as a neural statistician, and by this we mean a neural network that can learn to compute summary statistics of datasets without supervision.
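A minimal sketch of the underlying set-encoder idea (layer sizes and names are illustrative): encode each item of a dataset, pool the encodings with a permutation-invariant mean, and map the pooled vector to the statistics of a dataset-level latent.

```python
import torch
import torch.nn as nn

class SetStatisticEncoder(nn.Module):
    """Illustrative set encoder: per-item features are mean-pooled and mapped
    to the mean and log-variance of a dataset-level latent variable."""
    def __init__(self, x_dim=64, h_dim=128, c_dim=32):
        super().__init__()
        self.item_encoder = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.post_pool = nn.Linear(h_dim, 2 * c_dim)

    def forward(self, dataset):          # dataset: (n_items, x_dim)
        h = self.item_encoder(dataset)   # per-item features
        pooled = h.mean(dim=0)           # permutation-invariant summary
        mu, logvar = self.post_pool(pooled).chunk(2, dim=-1)
        return mu, logvar                # statistics of the dataset latent
```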