Meta-learning is a methodology concerned with "learning to learn": designing machine learning algorithms that improve how they learn from experience across tasks.
(Image credit: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)
Recent progress has demonstrated that such meta-learning methods can match or exceed human-designed architectures on image classification tasks.
Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task.
The move from hand-designed features to learned features in machine learning has been wildly successful.
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
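The idea above can be illustrated with a minimal sketch. This is not the paper's implementation: it uses the first-order simplification (often called FOMAML, which drops the second-derivative term of full MAML) on a toy family of linear-regression tasks, and all function names and hyperparameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, X, y):
    """Gradient of mean squared error for a linear model y_hat = X @ w."""
    return 2 * X.T @ (X @ w - y) / len(y)

def sample_task():
    """Toy task family: 1-D linear regression with a random slope."""
    slope = rng.uniform(-2.0, 2.0)
    X = rng.normal(size=(20, 1))
    y = slope * X[:, 0]
    return X, y

def fomaml(meta_steps=200, inner_lr=0.1, meta_lr=0.05, inner_steps=3):
    w = np.zeros(1)  # the meta-initialization being learned
    for _ in range(meta_steps):
        X, y = sample_task()
        # Inner loop: adapt a copy of the weights to the sampled task.
        w_task = w.copy()
        for _ in range(inner_steps):
            w_task -= inner_lr * loss_grad(w_task, X, y)
        # First-order outer update: apply the gradient evaluated at the
        # adapted weights directly to the meta-initialization.
        w -= meta_lr * loss_grad(w_task, X, y)
    return w

w = fomaml()
```

The key design point is that nothing here depends on the model being linear: any model trained by gradient descent can be slotted into the inner loop, which is what "model-agnostic" refers to.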
We introduce TensorFlow Quantum (TFQ), an open source library for the rapid prototyping of hybrid quantum-classical models for classical or quantum data.
This paper considers meta-learning problems, where there is a distribution of tasks, and we would like to obtain an agent that performs well (i.e., learns quickly) when presented with a previously unseen task sampled from this distribution.
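One simple way to meta-learn over a task distribution is a Reptile-style update: adapt to a sampled task with a few gradient steps, then move the meta-parameters toward the adapted weights. The sketch below is an illustrative toy (quadratic tasks with random optima), not the paper's experimental setup; all names and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def reptile_step(w, task_grad_fn, inner_lr=0.1, inner_steps=5, meta_lr=0.5):
    """One Reptile-style meta-update: adapt to a task, then interpolate
    the meta-parameters toward the adapted weights."""
    w_task = w.copy()
    for _ in range(inner_steps):
        w_task -= inner_lr * task_grad_fn(w_task)
    return w + meta_lr * (w_task - w)

def sample_task():
    """Toy task distribution: quadratic bowls with a random optimum."""
    target = rng.normal(loc=3.0, scale=0.5, size=2)
    return lambda w: 2 * (w - target)  # gradient of ||w - target||^2

w = np.zeros(2)
for _ in range(100):
    w = reptile_step(w, sample_task())
# w drifts toward the mean task optimum (around [3, 3])
```

Because the meta-update only interpolates between the current and adapted weights, no second-order derivatives are needed, which is what makes this family of methods cheap to run at scale.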
We conclude with a discussion of the rapid learning vs feature reuse question for meta-learning algorithms more broadly.