
Meta-Learning

152 papers with code · Methodology

Meta-learning is a methodology concerned with "learning to learn": machine learning algorithms that improve their own ability to adapt to new tasks from experience on previous tasks.

(Image credit: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)
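To make the "learning to learn" idea above concrete, the following is a minimal first-order MAML-style sketch on a toy family of 1-D linear regression tasks. It is written in the spirit of the credited paper but is not its implementation; the task distribution, the scalar linear model, and the learning rates are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_task():
        """A task is a 1-D regression problem y = a * x with its own slope a (an assumed toy family)."""
        return rng.uniform(-2.0, 2.0)

    def sample_batch(a, n=10):
        x = rng.uniform(-1.0, 1.0, size=n)
        return x, a * x

    def loss_and_grad(w, x, y):
        """Squared error and its gradient for the linear model y_hat = w * x."""
        err = w * x - y
        return np.mean(err ** 2), np.mean(2.0 * err * x)

    w = 0.0                   # meta-parameter: the shared initialization (a single scalar here)
    alpha, beta = 0.1, 0.01   # inner- and outer-loop learning rates

    for step in range(2000):
        a = sample_task()
        x_s, y_s = sample_batch(a)                 # support set: used for adaptation
        x_q, y_q = sample_batch(a)                 # query set: evaluates the adapted model
        _, g_support = loss_and_grad(w, x_s, y_s)
        w_adapted = w - alpha * g_support          # inner loop: one gradient step per task
        _, g_query = loss_and_grad(w_adapted, x_q, y_q)
        w -= beta * g_query                        # outer loop: first-order meta-update

The inner loop adapts a per-task copy of the parameters, while the outer loop trains the shared initialization; the first-order variant used here drops the second-derivative term of full MAML for simplicity.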

Leaderboards

You can find evaluation results in the subtasks. You can also submit evaluation metrics for this task.

Latest papers without code

Query-efficient Meta Attack to Deep Neural Networks

ICLR 2020

Black-box attack methods aim to infer suitable attack patterns to targeted DNN models by only using output feedback of the models and the corresponding input queries.

META-LEARNING
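The entry above describes the standard query-based black-box setting. A generic zeroth-order gradient-estimation attack of the kind such work tries to make query-efficient looks roughly as follows; query_loss, the perturbation budget, and all hyperparameters are illustrative assumptions, not the paper's method.

    import numpy as np

    def estimate_gradient(query_loss, x, sigma=0.01, n_samples=20):
        """Estimate the loss gradient from output feedback alone; each random
        perturbation costs one extra query to the black-box model."""
        base = query_loss(x)
        grad = np.zeros(x.shape)
        for _ in range(n_samples):
            u = np.random.randn(*x.shape)
            grad += (query_loss(x + sigma * u) - base) / sigma * u
        return grad / n_samples

    def black_box_attack(query_loss, x, eps=0.03, step=0.005, iters=50):
        """Iteratively perturb x inside an L-infinity ball of radius eps so as to
        increase the queried loss of the target model."""
        x_adv = x.copy()
        for _ in range(iters):
            g = estimate_gradient(query_loss, x_adv)
            x_adv = np.clip(x_adv + step * np.sign(g), x - eps, x + eps)
        return x_adv

Each iteration of this baseline spends n_samples + 1 queries just to estimate one gradient, which is the kind of per-example query cost that query-efficient meta-attack approaches aim to reduce.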

Meta-Learning Initializations for Image Segmentation

ICLR 2020

While meta-learning approaches that utilize neural network representations have made progress in few-shot image classification, reinforcement learning, and, more recently, image semantic segmentation, the training algorithms and model architectures have become increasingly specialized to the few-shot domain.

FEW-SHOT IMAGE CLASSIFICATION FEW-SHOT LEARNING SEMANTIC SEGMENTATION

Meta Label Correction for Learning with Weak Supervision

ICLR 2020

Leveraging weak or noisy supervision for building effective machine learning models has long been an important research problem.

META-LEARNING

NORML: Nodal Optimization for Recurrent Meta-Learning

ICLR 2020

Gradient-based meta-learning methods aim to do just that; however, recent work has shown that the effectiveness of these approaches is primarily due to feature reuse, and very little has to do with priming the system for rapid learning (learning to make effective weight updates on unseen data distributions).

META-LEARNING

Learning to Learn via Gradient Component Corrections

ICLR 2020

The context parameter of GECCO is updated to generate a low-rank corrective term for the network gradients.

META-LEARNING
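As a purely hypothetical illustration of a "low-rank corrective term for the network gradients", here is a rank-1 sketch; GECCO's actual parameterization and update rule are not given in the excerpt above, so the function and variable names below are assumptions.

    import numpy as np

    def corrected_update(w, grad, u, v, lr=0.01):
        """Gradient step after adding a low-rank (rank-1 here) corrective term,
        parameterized by a context (u, v), to the raw gradient."""
        correction = np.outer(u, v)        # rank-1 matrix with the same shape as w
        return w - lr * (grad + correction)

    # Hypothetical usage: in a meta-learner, u and v would themselves be learned context parameters.
    w = np.zeros((4, 3))
    grad = np.random.randn(4, 3)
    u, v = np.random.randn(4), np.random.randn(3)
    w = corrected_update(w, grad, u, v)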

Meta-RCNN: Meta Learning for Few-Shot Object Detection

ICLR 2020

Specifically, Meta-RCNN learns an object detector in an episodic learning paradigm on the (meta) training data.

FEW-SHOT OBJECT DETECTION META-LEARNING OBJECT CLASSIFICATION
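For reference, episodic (meta-)training means repeatedly sampling small tasks, each with a support set to adapt on and a query set to evaluate on. A minimal sketch of N-way K-shot episode construction is shown below; it is written for classification rather than detection and is not Meta-RCNN's actual pipeline.

    import random

    def sample_episode(examples_by_class, n_way=5, k_shot=1, n_query=5):
        """Build one N-way K-shot episode: a support set the learner adapts on
        and a query set on which the adapted model is evaluated."""
        classes = random.sample(list(examples_by_class), n_way)
        support, query = [], []
        for label, cls in enumerate(classes):
            items = random.sample(examples_by_class[cls], k_shot + n_query)
            support += [(x, label) for x in items[:k_shot]]
            query += [(x, label) for x in items[k_shot:]]
        return support, query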

Localized Meta-Learning: A PAC-Bayes Analysis for Meta-Learning Beyond Global Prior

ICLR 2020

Meta-learning methods learn meta-knowledge shared across various training tasks and aim to promote the learning of new tasks under the task-similarity assumption.

META-LEARNING

Robust Few-Shot Learning with Adversarially Queried Meta-Learners

ICLR 2020

Previous work on adversarially robust neural networks requires large training sets and computationally expensive training procedures.

FEW-SHOT LEARNING META-LEARNING TRANSFER LEARNING

Decoupling Adaptation from Modeling with Meta-Optimizers for Meta Learning

ICLR 2020

Meta-learning methods, most notably Model-Agnostic Meta-Learning (Finn et al., 2017) or MAML, have achieved great success in adapting to new tasks quickly after having been trained on similar tasks.

META-LEARNING

A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms

ICLR 2020

We propose to use a meta-learning objective that maximizes the speed of transfer on a modified distribution to learn how to modularize acquired knowledge.

META-LEARNING