549 papers with code • 2 benchmarks • 16 datasets

Meta-learning is a methodology concerned with "learning to learn": designing machine learning algorithms that improve how they learn.

(Image credit: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)

Greatest papers with code

Searching for Efficient Multi-Scale Architectures for Dense Image Prediction

tensorflow/models NeurIPS 2018

Recent progress has demonstrated that such meta-learning methods may exceed scalable human-invented architectures on image classification tasks.

Image Classification Meta-Learning +2

Meta-Learning Update Rules for Unsupervised Representation Learning

tensorflow/models ICLR 2019

Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task.

Meta-Learning Unsupervised Representation Learning

Meta Pseudo Labels

google-research/google-research CVPR 2021

We present Meta Pseudo Labels, a semi-supervised learning method that achieves a new state-of-the-art top-1 accuracy of 90.2% on ImageNet, which is 1.6% better than the existing state-of-the-art.

Meta-Learning Semi-Supervised Image Classification
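The core feedback loop of Meta Pseudo Labels, in which the teacher is updated according to how well its pseudo labels helped the student, can be sketched on a toy 1-D regression problem. All names here are illustrative, and the finite-difference teacher update is an assumption standing in for the paper's actual gradient-based implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression: the true mapping is y = 2x. Names are illustrative.
x_unlabeled = rng.uniform(-1, 1, 64)
x_val = rng.uniform(-1, 1, 16)
y_val = 2.0 * x_val                   # small labeled validation set

teacher_w = 0.5   # teacher's slope estimate
student_w = 0.0   # student's slope estimate
lr_s, lr_t = 0.1, 0.5

def val_loss(w):
    return np.mean((w * x_val - y_val) ** 2)

initial = val_loss(student_w)
for _ in range(300):
    xb = rng.choice(x_unlabeled, 8)
    pseudo = teacher_w * xb                       # teacher generates pseudo labels
    grad_s = np.mean(2 * (student_w * xb - pseudo) * xb)
    student_w -= lr_s * grad_s                    # student fits the pseudo labels

    # Teacher feedback: how does teacher_w affect the student's validation
    # loss after one more student step? (finite difference, for illustration)
    def post_step_val_loss(tw):
        g = np.mean(2 * (student_w * xb - tw * xb) * xb)
        return val_loss(student_w - lr_s * g)

    eps = 1e-3
    grad_t = (post_step_val_loss(teacher_w + eps)
              - post_step_val_loss(teacher_w - eps)) / (2 * eps)
    teacher_w -= lr_t * grad_t

final = val_loss(student_w)
```

The student never sees a true label; the labeled validation set influences it only indirectly, through the teacher's update.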

Meta-Learning Bidirectional Update Rules

google-research/google-research 10 Apr 2021

We show that classical gradient-based backpropagation in neural networks can be seen as a special case of a two-state network where one state is used for activations and another for gradients, with update rules derived from the chain rule.
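That view can be illustrated on a tiny two-layer network: one "state" carries activations forward, the other carries gradients backward via the chain rule. This is a minimal NumPy sketch of ordinary backpropagation in that two-state framing, not the paper's learned bidirectional rules:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny two-layer net with squared-error loss.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 2))
x = rng.normal(size=3)
y = rng.normal(size=2)

def forward(W1, W2, x):
    a = np.tanh(x @ W1)     # activation state, hidden layer
    out = a @ W2            # activation state, output layer
    return a, out

a, out = forward(W1, W2, x)
loss = 0.5 * np.sum((out - y) ** 2)

# Gradient state, propagated backward by the chain rule.
g_out = out - y                  # dL/d(out)
g_a = g_out @ W2.T               # chain rule through W2
g_pre = g_a * (1.0 - a ** 2)     # chain rule through tanh
gW1 = np.outer(x, g_pre)         # dL/dW1
gW2 = np.outer(a, g_out)         # dL/dW2

# Sanity-check one weight against a finite difference.
eps = 1e-5
W1_pert = W1.copy()
W1_pert[0, 0] += eps
_, out_pert = forward(W1_pert, W2, x)
fd = (0.5 * np.sum((out_pert - y) ** 2) - loss) / eps
```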


ES-MAML: Simple Hessian-Free Meta Learning

google-research/google-research ICLR 2020

We introduce ES-MAML, a new framework for solving the model-agnostic meta-learning (MAML) problem based on Evolution Strategies (ES).
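The key ingredient is that ES estimates gradients purely from function evaluations, so the MAML outer loop never needs second derivatives. A minimal sketch of the antithetic ES gradient estimator on a toy objective (the objective and hyperparameters are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def f(theta):
    # Toy smooth objective standing in for a meta-objective that may not be
    # differentiable in practice (the setting ES is designed for).
    return -np.sum((theta - 3.0) ** 2)

def es_gradient(f, theta, sigma=0.1, n=64):
    # Antithetic ES estimator: only function evaluations, no derivatives,
    # hence no Hessians when plugged into the MAML outer loop.
    eps = rng.normal(size=(n, theta.size))
    diffs = np.array([f(theta + sigma * e) - f(theta - sigma * e) for e in eps])
    return (diffs[:, None] * eps).mean(axis=0) / (2 * sigma)

theta = np.zeros(2)
for _ in range(100):
    theta += 0.05 * es_gradient(f, theta)   # gradient *ascent* on f
```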


Meta Back-translation

google-research/google-research ICLR 2021

Back-translation is an effective strategy to improve the performance of Neural Machine Translation (NMT) by generating pseudo-parallel data.

Machine Translation Meta-Learning

Meta-Learning Requires Meta-Augmentation

google-research/google-research NeurIPS 2020

Meta-learning algorithms aim to learn two components: a model that predicts targets for a task, and a base learner that quickly updates that model when given examples from a new task.
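That two-component structure can be sketched as a MAML-style loop: a meta-learned initialization plays the role of the model, and a single gradient step plays the role of the base learner. The tasks and the finite-difference meta-gradient below are illustrative assumptions, a stand-in for backpropagating through the inner step:

```python
import numpy as np

rng = np.random.default_rng(3)

inner_lr, meta_lr = 0.1, 0.05
w0 = 0.0   # meta-learned initialization: the "model" component

def sample_task():
    a = rng.uniform(1.0, 3.0)           # each task: fit y = a * x
    x = rng.uniform(-1, 1, 16)
    return x, a * x

def adapt(w, x, y):
    # Base learner: one gradient step on the task's support data.
    grad = np.mean(2 * (w * x - y) * x)
    return w - inner_lr * grad

for _ in range(2000):
    x_s, y_s = sample_task()

    def post_adapt_loss(w_init):
        wa = adapt(w_init, x_s, y_s)
        return np.mean((wa * x_s - y_s) ** 2)

    # Outer step: improve the initialization so that adaptation works well.
    eps = 1e-4
    meta_grad = (post_adapt_loss(w0 + eps) - post_adapt_loss(w0 - eps)) / (2 * eps)
    w0 -= meta_lr * meta_grad
```

With task slopes drawn from [1, 3], the meta-learned initialization drifts toward the center of the task distribution, from which one gradient step can reach any individual task quickly.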


Meta-Learning without Memorization

google-research/google-research ICLR 2020

Without a mechanism to prevent memorization, the meta-learner can ignore the task training data and learn a single model that performs all of the meta-training tasks zero-shot, but does not adapt effectively to new image classes.

Few-Shot Image Classification Meta-Learning

Data Valuation using Reinforcement Learning

google-research/google-research ICML 2020

To adaptively learn data values jointly with the target task predictor model, we propose a meta learning framework which we name Data Valuation using Reinforcement Learning (DVRL).

Domain Adaptation Meta-Learning
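The DVRL idea of scoring samples by their contribution to the target predictor's performance can be sketched with a REINFORCE-style value estimator on toy data containing corrupted labels. The per-sample logits and the closed-form predictor are simplifications assumed for illustration, in place of DVRL's estimator network and trained predictor:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: y = x, but the second half of the labels are corrupted.
n = 20
x = rng.uniform(-1, 1, n)
y = x.copy()
noisy = np.arange(n) >= n // 2
y[noisy] = -x[noisy]

x_valid = rng.uniform(-1, 1, 50)   # clean validation set drives the reward
y_valid = x_valid.copy()

logits = np.zeros(n)   # per-sample data-value logits
baseline = 0.0

def reward(mask):
    if mask.sum() == 0:
        return -1.0
    # Closed-form least-squares slope fitted on the selected subset.
    w = np.sum(x[mask] * y[mask]) / np.sum(x[mask] ** 2)
    return -np.mean((w * x_valid - y_valid) ** 2)

for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-logits))   # selection probabilities
    mask = rng.random(n) < p
    r = reward(mask)
    # REINFORCE: raise logits of selected samples when reward beats the baseline.
    logits += 0.5 * (r - baseline) * (mask - p)
    baseline = 0.9 * baseline + 0.1 * r
```

Samples whose inclusion hurts validation performance end up with low selection logits, which is the sense in which the learned values identify corrupted data.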

NoRML: No-Reward Meta Learning

google-research/google-research 4 Mar 2019

To this end, we introduce a method that allows for self-adaptation of learned policies: No-Reward Meta Learning (NoRML).