84 papers with code • 1 benchmark • 4 datasets
One-shot learning is the task of learning information about object categories from a single training example.
(Image credit: Siamese Neural Networks for One-shot Image Recognition)
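To make the task concrete, here is a toy sketch of one-shot classification: each novel class is represented by a single labelled "support" example, and a query is assigned to the class whose support embedding is most similar. The learned embedding of a Siamese network is stood in for by plain L2 normalisation, and all data is synthetic; nothing here comes from the cited paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x):
    # Stand-in for a learned feature extractor (here: just L2 normalisation).
    return x / np.linalg.norm(x)

# Three unseen classes, one labelled example of each (the "one shot").
class_prototypes = rng.normal(size=(3, 16))
support = {c: embed(class_prototypes[c] + 0.1 * rng.normal(size=16))
           for c in range(3)}

def classify(query):
    # Nearest support example by cosine similarity decides the class.
    z = embed(query)
    return max(support, key=lambda c: float(z @ support[c]))

# A noisy new view of class 1 should match class 1's single example.
pred = classify(class_prototypes[1] + 0.1 * rng.normal(size=16))
```

The point is that no per-class training happens at test time: classification reduces to a similarity comparison against one stored example per class.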
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
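The MAML idea above can be sketched on toy 1-D regression tasks of the form y = a·x, each task having a different slope a. This is a minimal illustration, not the paper's code: the meta-gradient is approximated by central finite differences to sidestep the second-order terms that autodiff would normally handle, and all hyperparameters are made up.

```python
import numpy as np

def loss_grad(w, x, y):
    # d/dw of the task loss 0.5 * mean((w*x - y)^2)
    return np.mean((w * x - y) * x)

def adapted_loss(w0, x, y, inner_lr):
    # Task loss after ONE inner-loop gradient step from initialisation w0.
    w1 = w0 - inner_lr * loss_grad(w0, x, y)
    return 0.5 * np.mean((w1 * x - y) ** 2)

def maml_step(w, tasks, inner_lr=0.5, meta_lr=0.05, eps=1e-4):
    # Outer loop: move w so that the post-adaptation loss, averaged over
    # tasks, decreases. Meta-gradient via central finite differences.
    g = 0.0
    for x, y in tasks:
        g += (adapted_loss(w + eps, x, y, inner_lr)
              - adapted_loss(w - eps, x, y, inner_lr)) / (2 * eps)
    return w - meta_lr * g / len(tasks)

rng = np.random.default_rng(0)
w = 0.0
for _ in range(2000):
    tasks = []
    for _ in range(4):                       # a small batch of tasks
        a = rng.uniform(0.5, 2.0)            # each task: y = a * x
        x = rng.uniform(-1.0, 1.0, size=10)
        tasks.append((x, a * x))
    w = maml_step(w, tasks)
```

After meta-training, `w` is an initialisation from which a single gradient step adapts well to a new slope, which is exactly the model-agnostic property: the procedure never inspects the model beyond its gradients.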
Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of "one-shot learning."
The process of learning good features for machine learning applications can be very computationally expensive and may prove difficult in cases where little data is available.
In order to create a personalized talking head model, prior works require training on a large dataset of images of a single person.
In this context, the goal of our work is to devise a few-shot visual learning system that, at test time, can efficiently learn novel categories from only a few training examples while not forgetting the base categories on which it was initially trained.
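One simple mechanism with this "learn new classes without forgetting" property (a generic illustration, not the specific method of this work) is weight imprinting on a cosine classifier: the base-class weight vectors stay frozen, and each novel class gets a new weight row set to the normalised mean embedding of its few examples.

```python
import numpy as np

rng = np.random.default_rng(1)

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

base_weights = normalize(rng.normal(size=(5, 8)))  # 5 base classes, frozen

def imprint(weights, few_shot_embeddings):
    # Append one weight row per novel class: the unit-norm mean embedding
    # of its few support examples. Base rows are untouched, so base-class
    # behaviour cannot be forgotten.
    new_row = normalize(few_shot_embeddings.mean(axis=0, keepdims=True))
    return np.vstack([weights, new_row])

def classify(weights, embedding):
    # Cosine-similarity classifier over base + imprinted classes.
    return int(np.argmax(weights @ normalize(embedding)))

# Imprint a novel class (index 5) from 3 embeddings near a common direction.
direction = rng.normal(size=8)
shots = direction + 0.1 * rng.normal(size=(3, 8))
weights = imprint(base_weights, shots)
```

Because the base weights are never updated, accuracy on base categories is preserved by construction, while the imprinted row gives immediate (if crude) recognition of the novel category.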
We propose the `less than one'-shot learning task where models must learn $N$ new classes given only $M<N$ examples and we show that this is achievable with the help of soft labels.
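A minimal 1-D sketch of how soft labels make M &lt; N possible: two soft-labelled prototypes induce a classifier over three classes under distance-weighted soft-label kNN. The prototype positions and label vectors below are invented for illustration only.

```python
import numpy as np

prototypes = np.array([0.0, 1.0])            # M = 2 labelled points
soft_labels = np.array([[0.6, 0.0, 0.4],     # mostly class 0, partly class 2
                        [0.0, 0.6, 0.4]])    # mostly class 1, partly class 2

def predict(x, eps=1e-9):
    # Inverse-distance weights over the prototypes...
    w = 1.0 / (np.abs(prototypes - x) + eps)
    # ...then a weighted sum of their soft labels; argmax gives the class.
    scores = w @ soft_labels
    return int(np.argmax(scores))
```

Near x = 0 the first prototype dominates (class 0), near x = 1 the second dominates (class 1), and in the middle the shared mass on the third label wins, so N = 3 classes emerge from M = 2 examples.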