One-shot learning is the task of learning information about object categories from a single training example.
(Image credit: Siamese Neural Networks for One-shot Image Recognition)
In this work we present Ludwig, a flexible, extensible, and easy-to-use toolbox that allows users to train deep learning models and obtain predictions from them without writing code.
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
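The meta-learning scheme described above can be illustrated on a deliberately tiny problem. The sketch below is a hypothetical, minimal rendering of the inner/outer update structure using toy 1-D quadratic "tasks" (task i has loss (w - t_i)^2), so all gradients are available in closed form; a real implementation would differentiate a neural network with an autodiff framework, and the step sizes and task targets here are illustrative assumptions.

```python
# Hypothetical MAML-style sketch on toy 1-D quadratic tasks:
# inner loop adapts a shared initialization w to each task, and the
# outer loop moves w against the post-adaptation gradients.

ALPHA = 0.1   # inner-loop (task adaptation) step size
BETA = 0.05   # outer-loop (meta) step size

def grad(w, target):
    # d/dw of the task loss (w - target)^2
    return 2.0 * (w - target)

def maml_step(w, task_targets):
    """One meta-update over a batch of tasks."""
    meta_grad = 0.0
    for t in task_targets:
        w_adapted = w - ALPHA * grad(w, t)  # inner gradient step
        # chain rule through the inner step:
        # dL(w_adapted)/dw = L'(w_adapted) * d(w_adapted)/dw
        #                  = L'(w_adapted) * (1 - 2 * ALPHA)
        meta_grad += grad(w_adapted, t) * (1.0 - 2.0 * ALPHA)
    return w - BETA * meta_grad / len(task_targets)

# Tasks whose optima straddle one another: the meta-learned
# initialization drifts toward a point that adapts quickly to any task.
tasks = [-1.0, 1.0, 3.0]
w = 5.0
for _ in range(200):
    w = maml_step(w, tasks)
```

For quadratic losses of equal curvature the meta-objective is minimized at the mean of the task optima, so `w` converges toward 1.0 here; the same two-level structure carries over unchanged when `grad` comes from backpropagation through an arbitrary model, which is the sense in which the method is model-agnostic.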
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class.
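The classification rule behind prototypical networks can be sketched in a few lines: each class prototype is the mean of its support-set embeddings, and a query is assigned to the class with the nearest prototype. The toy 2-D "embeddings" and labels below are illustrative assumptions; a real system would produce embeddings with a learned neural encoder.

```python
# Hypothetical minimal sketch of the prototypical-networks rule:
# prototype = mean of support embeddings; classify by nearest prototype.

def prototype(support_vectors):
    """Mean of the support embeddings for one class."""
    dim = len(support_vectors[0])
    n = len(support_vectors)
    return [sum(v[d] for v in support_vectors) / n for d in range(dim)]

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(query, prototypes):
    """Label of the class prototype nearest to the query embedding."""
    return min(prototypes, key=lambda lbl: squared_distance(query, prototypes[lbl]))

# A 2-way, 2-shot episode with toy 2-D embeddings.
support = {
    "cat": [[0.9, 0.1], [1.1, 0.0]],
    "dog": [[0.0, 1.0], [0.1, 0.9]],
}
protos = {label: prototype(vecs) for label, vecs in support.items()}
label = classify([1.0, 0.2], protos)  # a query near the "cat" cluster
```

Because only the support-set means must be recomputed for a new episode, novel classes unseen during training can be handled at test time without retraining the classifier head.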
Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2%, and from 88.0% to 93.8% on Omniglot, compared to competing approaches.
In order to create a personalized talking head model, these works require training on a large dataset of images of a single person.
In this context, the goal of our work is to devise a few-shot visual learning system that, at test time, can efficiently learn novel categories from only a few training examples while not forgetting the initial categories on which it was trained (here called base categories).
Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of "one-shot learning."