Meta-learning is a methodology concerned with "learning to learn" machine learning algorithms.
Recent progress has demonstrated that such meta-learning methods can surpass human-designed architectures on image classification tasks.
Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task.
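To give a concrete sense of what a parameterized unsupervised weight update rule could look like, the sketch below applies a gated Hebbian-style step whose coefficients stand in for meta-learned parameters. This is a hypothetical illustration under stated assumptions, not the paper's actual learned rule: the functional form, the parameter vector `theta`, and all names here are invented for the example.

```python
import numpy as np

def unsupervised_update(W, x, theta, lr=0.01):
    # Hypothetical parameterized update rule (stand-in for a meta-learned
    # rule): a gated Hebbian step whose coefficients theta = (a, b, c)
    # would be meta-learned against downstream semi-supervised performance.
    a, b, c = theta
    h = np.tanh(W @ x)                            # unit activations
    dW = (a * np.outer(h, x)                      # Hebbian term
          + b * np.outer(h, h) @ W                # lateral interaction term
          + c * W)                                # weight decay term
    return W + lr * dW
```

The key point is that the update uses only unlabeled inputs `x`; only the outer meta-objective (downstream classification accuracy) would involve labels.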
The move from hand-designed features to learned features in machine learning has been wildly successful.
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
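The gradient-based, model-agnostic idea can be sketched with a first-order variant (FOMAML-style, dropping second derivatives for simplicity). The toy linear regression model and task tuple format below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def loss_grad(w, X, y):
    # Squared-error loss and gradient for a linear model y_hat = X @ w.
    err = X @ w - y
    return (err ** 2).mean(), 2 * X.T @ err / len(y)

def fomaml_step(w, tasks, inner_lr=0.01, outer_lr=0.001, inner_steps=5):
    # First-order MAML sketch: adapt a copy of w on each task's support
    # set, then average the query-set gradients evaluated at the adapted
    # parameters (ignoring second-order terms).
    meta_grad = np.zeros_like(w)
    for X_s, y_s, X_q, y_q in tasks:
        w_task = w.copy()
        for _ in range(inner_steps):
            _, g = loss_grad(w_task, X_s, y_s)
            w_task -= inner_lr * g
        _, g_q = loss_grad(w_task, X_q, y_q)
        meta_grad += g_q
    return w - outer_lr * meta_grad / len(tasks)
```

The outer step moves the initialization `w` toward a point from which a few inner gradient steps reach good task-specific parameters; the same structure applies to any differentiable model.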
This paper considers meta-learning problems, where there is a distribution of tasks, and we would like to obtain an agent that performs well (i.e., learns quickly) when presented with a previously unseen task sampled from this distribution.
Once trained, a Relation Network (RN) is able to classify images of new classes by computing relation scores between query images and the few examples of each new class without further updating the network.
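A minimal sketch of relation-score classification: each query is scored against every support example and assigned to the class with the highest mean score. Cosine similarity stands in for the learned relation module here (the actual method learns that scoring network end to end); the function names and the cosine substitution are assumptions for illustration.

```python
import numpy as np

def cosine_relation(q, s):
    # Stand-in for the learned relation module: cosine similarity
    # between query and support embeddings.
    q = q / np.linalg.norm(q, axis=-1, keepdims=True)
    s = s / np.linalg.norm(s, axis=-1, keepdims=True)
    return q @ s.T

def classify_by_relation(query_emb, support_emb, support_labels, n_classes):
    # Average the relation scores against each class's support examples,
    # then pick the highest-scoring class for each query.
    scores = cosine_relation(query_emb, support_emb)   # (n_query, n_support)
    class_scores = np.stack(
        [scores[:, support_labels == c].mean(axis=1) for c in range(n_classes)],
        axis=1)
    return class_scores.argmax(axis=1)
```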
However, a single instruction may be insufficient to fully communicate our intent or, even if it is, may be insufficient for an autonomous agent to actually understand how to perform the desired task.
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class.
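The core computation here is simple enough to sketch: represent each class by the mean of its support embeddings (its prototype), then classify a query by its nearest prototype. The sketch below assumes embeddings are already computed; in the actual method they come from a learned encoder.

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    # Each class prototype is the mean of that class's support embeddings.
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    # Assign each query to its nearest prototype (squared Euclidean distance).
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)
```

Because prototypes are just class means, adding a new class at test time only requires embedding its few examples and averaging, with no further training.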