57 papers with code • 1 benchmark • 3 datasets
One-shot learning is the task of learning information about object categories from a single training example.
(Image credit: Siamese Neural Networks for One-shot Image Recognition)
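The core idea behind Siamese-network one-shot recognition can be sketched in a few lines: map examples into an embedding space, then assign a query the label of its single nearest support example. The embeddings below are hand-made 2-D points standing in for a learned network, purely for illustration.

```python
# One-shot classification by embedding similarity: each class has a
# single support example; a query takes the label of the nearest one.

def squared_distance(a, b):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def one_shot_classify(query, support):
    """support: dict mapping class label -> its single support embedding."""
    return min(support, key=lambda label: squared_distance(query, support[label]))

support = {"cat": [0.0, 1.0], "dog": [1.0, 0.0]}
print(one_shot_classify([0.1, 0.9], support))  # -> cat
```

In a real system the embeddings would come from a trained Siamese network; the nearest-neighbor decision rule itself stays this simple.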
In this work we present Ludwig, a flexible, extensible, and easy-to-use toolbox that allows users to train deep learning models and obtain predictions from them without writing code.
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
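The meta-learning loop described above can be illustrated with a first-order sketch (the full algorithm differentiates through the inner gradient step; the first-order variant drops that second-order term). Here each toy task asks a scalar parameter `theta` to match a target `a` under squared loss, and meta-training finds an initialization that adapts to any task in one gradient step. All names and hyperparameters are illustrative, not from the paper.

```python
# First-order sketch of gradient-based meta-learning on a toy task
# family: task a has loss L_a(theta) = (theta - a)**2.

def grad(theta, a):
    """Gradient of (theta - a)**2 with respect to theta."""
    return 2.0 * (theta - a)

def inner_step(theta, a, alpha=0.1):
    """Task-specific adaptation: one gradient step on task a."""
    return theta - alpha * grad(theta, a)

def meta_train(tasks, theta=0.0, alpha=0.1, beta=0.05, iters=500):
    """Outer loop: update theta with the averaged post-adaptation gradient
    (first-order approximation, i.e. no second-order terms)."""
    for _ in range(iters):
        meta_grad = sum(grad(inner_step(theta, a, alpha), a)
                        for a in tasks) / len(tasks)
        theta -= beta * meta_grad
    return theta

theta = meta_train([1.0, 3.0])
print(round(theta, 3))                   # -> 2.0 (midpoint of the tasks)
print(round(inner_step(theta, 1.0), 3))  # -> 1.8 (one step adapts toward a=1)
```

The meta-learned initialization sits where a single inner step makes the most progress on every task, which is exactly the model-agnostic objective, applied here to the simplest possible model.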
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class.
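The prototypical-networks decision rule is simple enough to sketch directly: each class prototype is the mean of its support embeddings, and a query is assigned to the class with the nearest prototype. The 2-D embeddings below stand in for a learned embedding network.

```python
# Prototypical classification: prototype = mean of support embeddings,
# query -> class of the nearest prototype (squared Euclidean distance).

def prototype(embeddings):
    """Per-dimension mean of a class's support embeddings."""
    n = len(embeddings)
    return [sum(dim) / n for dim in zip(*embeddings)]

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(query, support):
    """support: dict mapping label -> list of support embeddings."""
    protos = {label: prototype(embs) for label, embs in support.items()}
    return min(protos, key=lambda label: squared_distance(query, protos[label]))

support = {
    "square": [[0.0, 0.0], [0.2, 0.0], [0.0, 0.2]],
    "circle": [[1.0, 1.0], [0.8, 1.0], [1.0, 0.8]],
}
print(classify([0.1, 0.1], support))  # -> square
```

Because prototypes are just means, novel classes unseen during training can be classified at test time from a handful of support examples, with no parameter updates.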
Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches.
In order to create a personalized talking head model, these works require training on a large dataset of images of a single person.
In this context, the goal of our work is to devise a few-shot visual learning system that, at test time, can efficiently learn novel categories from only a few training examples while not forgetting the initial categories on which it was trained (here called base categories).
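One common recipe for adding novel categories without forgetting base ones is weight imprinting with a cosine-similarity classifier: base class weights stay frozen, and each novel class gets a weight set to the normalized mean of its few support embeddings. This is a simplified illustration of that general idea, not necessarily the exact mechanism of the work above; all names are hypothetical.

```python
import math

# Extend a cosine classifier with novel classes: base weights are kept
# frozen; a novel class weight is "imprinted" as the normalized mean of
# its support embeddings, so base and novel classes score comparably.

def normalize(v):
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def imprint(support_embeddings):
    """Novel-class weight: normalized mean of the support embeddings."""
    n = len(support_embeddings)
    mean = [sum(dim) / n for dim in zip(*support_embeddings)]
    return normalize(mean)

def cosine_classify(query, weights):
    """weights: dict mapping label -> unit weight vector."""
    q = normalize(query)
    return max(weights, key=lambda label: sum(a * b for a, b in zip(q, weights[label])))

# Frozen base-class weights (already unit-norm), e.g. from prior training.
weights = {"base_cat": [1.0, 0.0], "base_dog": [0.0, 1.0]}
# Imprint a novel class from two support embeddings; base weights untouched.
weights["novel_fox"] = imprint([[1.0, 1.0], [0.8, 1.2]])
print(cosine_classify([0.9, 1.1], weights))  # -> novel_fox
```

Because the base weights are never modified, base-category queries keep their original decision boundaries, which is the "not forgetting" property the system aims for.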
Ranked #1 on Few-Shot Image Classification on ImageNet (1-shot)
The process of learning good features for machine learning applications can be very computationally expensive and may prove difficult in cases where little data is available.
Ranked #1 on One-Shot Learning on MNIST (using extra training data)