About

Benchmarks

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Datasets

Greatest papers with code

Continual Unsupervised Representation Learning

NeurIPS 2019 deepmind/deepmind-research

Continual learning aims to improve the ability of modern learning systems to deal with non-stationary distributions, typically by attempting to learn a series of tasks sequentially.

CONTINUAL LEARNING OMNIGLOT UNSUPERVISED REPRESENTATION LEARNING

Global Convergence and Generalization Bound of Gradient-Based Meta-Learning with Deep Neural Nets

25 Jun 2020 facebookresearch/higher

Gradient-based meta-learning (GBML) with deep neural nets (DNNs) has become a popular approach for few-shot learning.

FEW-SHOT LEARNING OMNIGLOT

The Omniglot challenge: a 3-year progress report

9 Feb 2019 brendenlake/omniglot

Three years ago, we released the Omniglot dataset for one-shot learning, along with five challenge tasks and a computational model that addresses these tasks.

CLASSIFICATION OMNIGLOT ONE-SHOT LEARNING

Matching Networks for One Shot Learning

NeurIPS 2016 oscarknagg/few-shot

Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2%, and from 88.0% to 93.8% on Omniglot, compared to competing approaches.

FEW-SHOT IMAGE CLASSIFICATION LANGUAGE MODELLING METRIC LEARNING OMNIGLOT ONE-SHOT LEARNING

Meta-Learning for Semi-Supervised Few-Shot Classification

ICLR 2018 y2l/meta-transfer-learning-tensorflow

To address this paradigm, we propose novel extensions of Prototypical Networks (Snell et al., 2017) that are augmented with the ability to use unlabeled examples when producing prototypes.
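The idea of producing prototypes from both labeled and unlabeled examples can be sketched with one soft k-means refinement step, in the spirit of the paper's soft-assignment extension. This is an illustrative numpy sketch, not the authors' implementation; all function and variable names are assumptions.

```python
import numpy as np

def soft_kmeans_prototypes(support, labels, unlabeled, n_classes):
    """Refine class prototypes with unlabeled embeddings via one soft
    k-means step (illustrative sketch of semi-supervised prototypes).

    support:   (n_support, dim) labeled embeddings
    labels:    (n_support,) integer class labels
    unlabeled: (n_unlabeled, dim) unlabeled embeddings
    """
    # Initial prototypes: per-class mean of the labeled support embeddings.
    protos = np.stack([support[labels == c].mean(axis=0)
                       for c in range(n_classes)])
    # Soft-assign each unlabeled point to prototypes by squared distance.
    d2 = ((unlabeled[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2)
    w /= w.sum(axis=1, keepdims=True)        # (n_unlabeled, n_classes)
    # Refined prototype: weighted mean of labeled and soft-assigned points.
    counts = np.array([(labels == c).sum() for c in range(n_classes)])
    num = protos * counts[:, None] + w.T @ unlabeled
    den = counts[:, None] + w.sum(axis=0)[:, None]
    return num / den
```

With well-separated clusters, the refined prototypes shift from the support means toward the nearby unlabeled points, which is the intended effect of using unlabeled examples when producing prototypes.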

CLASSIFICATION HIERARCHICAL STRUCTURE META-LEARNING OMNIGLOT

Differentiable plasticity: training plastic neural networks with backpropagation

ICML 2018 uber-common/differentiable-plasticity

How can we build agents that keep learning from experience, quickly and efficiently, after their initial training?

META-LEARNING OMNIGLOT

Data Augmentation Generative Adversarial Networks

ICLR 2018 AntreasAntoniou/DAGAN

The model, based on image-conditional Generative Adversarial Networks, takes data from a source domain and learns to generalise any data item to generate other within-class data items.

DATA AUGMENTATION FEW-SHOT LEARNING OMNIGLOT

Learning to cluster in order to transfer across domains and tasks

ICLR 2018 GT-RIPL/L2C

The key insight is that, in addition to features, we can transfer similarity information, and this is sufficient to learn a similarity function and clustering network that perform both domain adaptation and cross-task transfer learning.

OMNIGLOT TRANSFER LEARNING UNSUPERVISED DOMAIN ADAPTATION

VAE with a VampPrior

19 May 2017 jmtomczak/vae_vampprior

In this paper, we propose to extend the variational auto-encoder (VAE) framework with a new type of prior which we call "Variational Mixture of Posteriors" prior, or VampPrior for short.
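The VampPrior evaluates the prior density of a latent z as a uniform mixture of the encoder's diagonal-Gaussian posteriors at K learned pseudo-inputs. A minimal numpy sketch of that density, assuming the K posterior means and log-variances have already been computed from the pseudo-inputs (names are illustrative, not the authors' API):

```python
import numpy as np

def vamp_prior_logpdf(z, pseudo_means, pseudo_logvars):
    """Log density of a VampPrior at a single latent point z.

    pseudo_means, pseudo_logvars: (K, D) parameters of the K
    diagonal-Gaussian posteriors q(z | u_k) at the pseudo-inputs u_k.
    """
    K, D = pseudo_means.shape
    # log N(z; mu_k, diag(exp(logvar_k))) for each mixture component k
    diff = z[None, :] - pseudo_means                     # (K, D)
    log_comp = -0.5 * (D * np.log(2 * np.pi)
                       + pseudo_logvars.sum(axis=1)
                       + (diff ** 2 / np.exp(pseudo_logvars)).sum(axis=1))
    # log (1/K) sum_k exp(log_comp_k), computed stably via log-sum-exp
    m = log_comp.max()
    return m + np.log(np.exp(log_comp - m).sum()) - np.log(K)
```

In training, the pseudo-inputs (and hence the mixture components) are learned jointly with the VAE, so the prior adapts to where the aggregate posterior places mass.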

OMNIGLOT