Continual Learning

63 papers with code · Methodology

Leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

Continual Unsupervised Representation Learning

NeurIPS 2019 deepmind/deepmind-research

Continual learning aims to improve the ability of modern learning systems to deal with non-stationary distributions, typically by attempting to learn a series of tasks sequentially.

CONTINUAL LEARNING OMNIGLOT UNSUPERVISED REPRESENTATION LEARNING

Three scenarios for continual learning

15 Apr 2019 GMvandeVen/continual-learning

Standard artificial neural networks suffer from the well-known problem of catastrophic forgetting, making continual or lifelong learning difficult for machine-learning systems.

CONTINUAL LEARNING
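Catastrophic forgetting can be illustrated without any deep-learning machinery. The toy sketch below (my own illustration, not code from the paper above) fits a single scalar weight to one regression task, then trains the same weight on a conflicting second task, and shows that performance on the first task collapses:

```python
import numpy as np

def train_linear(w, xs, ys, lr=0.1, steps=200):
    """Fit scalar weight w to y = w*x by gradient descent on squared error."""
    for _ in range(steps):
        grad = np.mean(2 * (w * xs - ys) * xs)
        w -= lr * grad
    return w

def mse(w, xs, ys):
    return float(np.mean((w * xs - ys) ** 2))

rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, 100)

# Task A: y = 2x.  Task B: y = -2x (conflicting target).
ys_a, ys_b = 2 * xs, -2 * xs

w = 0.0
w = train_linear(w, xs, ys_a)
loss_a_before = mse(w, xs, ys_a)   # near zero after training on task A

w = train_linear(w, xs, ys_b)      # sequential training on task B only
loss_a_after = mse(w, xs, ys_a)    # task-A error grows sharply: forgetting
```

The continual-learning methods listed on this page differ in how they prevent the second phase of training from overwriting what the first phase learned.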

Generative replay with feedback connections as a general strategy for continual learning

27 Sep 2018 GMvandeVen/continual-learning

A major obstacle to developing artificial intelligence applications capable of true lifelong learning is that artificial neural networks quickly or catastrophically forget previously learned tasks when trained on a new one.

CONTINUAL LEARNING
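The core idea of generative replay is to interleave the current task's batch with pseudo-samples drawn from a generative model of earlier tasks, labeled by the frozen previous model. A minimal sketch, assuming hypothetical stand-ins for the generator and the old model (the real method trains both; nothing here is taken from the paper's implementation):

```python
import numpy as np

def replay_batch(new_x, new_y, generator, old_solver, n_replay):
    """Mix the current task's batch with replayed pseudo-samples.

    generator: draws inputs resembling previous tasks (stand-in for a
    trained generative model); old_solver: the frozen previous model,
    used to label the generated inputs.
    """
    replay_x = generator(n_replay)
    replay_y = old_solver(replay_x)   # labels come from the old model
    x = np.concatenate([new_x, replay_x])
    y = np.concatenate([new_y, replay_y])
    return x, y

# Toy usage with stand-in components:
gen = lambda n: np.random.default_rng(0).normal(size=(n, 2))
old = lambda x: (x.sum(axis=1) > 0).astype(int)
x, y = replay_batch(np.ones((4, 2)), np.ones(4, dtype=int), gen, old, n_replay=4)
```

Training on the mixed batch lets the network rehearse old tasks without storing any real past data.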

Gradient Episodic Memory for Continual Learning

NeurIPS 2017 facebookresearch/GradientEpisodicMemory

One major obstacle toward AI is the poor ability of models to solve new problems quickly without forgetting previously acquired knowledge.

CONTINUAL LEARNING
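GEM constrains each update so that loss on an episodic memory of past tasks does not increase, solving a quadratic program with one constraint per previous task. The sketch below shows the simpler single-constraint projection (as used in the follow-up method A-GEM), which conveys the geometric idea:

```python
import numpy as np

def project_gradient(g, g_ref):
    """If the proposed gradient g would increase loss on the memory
    (negative inner product with the reference gradient g_ref),
    project g onto the half-space where that interference vanishes.
    Single-constraint variant; GEM proper solves a QP with one
    constraint per previous task."""
    dot = g @ g_ref
    if dot >= 0:
        return g                                 # no interference: keep g
    return g - (dot / (g_ref @ g_ref)) * g_ref   # remove conflicting component

g = np.array([1.0, -1.0])      # gradient of the current task
g_ref = np.array([0.0, 1.0])   # gradient on the episodic memory
g_proj = project_gradient(g, g_ref)
```

After projection the update is orthogonal to the memory gradient, so to first order it no longer harms past-task performance.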

Practical Deep Learning with Bayesian Principles

NeurIPS 2019 team-approx-bayes/dl-with-bayes

Importantly, the benefits of Bayesian principles are preserved: predictive probabilities are well-calibrated, uncertainties on out-of-distribution data are improved, and continual-learning performance is boosted.

CONTINUAL LEARNING DATA AUGMENTATION

Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines

30 Oct 2018 GT-RIPL/Continual-Learning-Benchmark

Continual learning has received a great deal of attention recently with several approaches being proposed.

CONTINUAL LEARNING L2 REGULARIZATION
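One of the strong baselines this kind of categorization argues for is a plain L2 penalty that keeps the new task's parameters close to those learned on earlier tasks (an EWC-style penalty with uniform rather than Fisher-weighted importance). A minimal sketch of the regularized objective, with illustrative values of my own:

```python
import numpy as np

def l2_transfer_loss(task_loss, theta, theta_old, lam):
    """Objective for learning a new task while staying near the
    parameters theta_old learned on earlier tasks: task loss plus a
    uniform quadratic penalty on parameter drift."""
    penalty = lam * np.sum((theta - theta_old) ** 2)
    return task_loss + penalty

theta_old = np.array([1.0, -2.0])   # parameters after previous tasks
theta = np.array([1.5, -1.0])       # current parameters
total = l2_transfer_loss(0.25, theta, theta_old, lam=0.1)
# penalty = 0.1 * ((0.5)**2 + (1.0)**2) = 0.125, so total = 0.375
```

Tuning the strength `lam` trades plasticity on the new task against retention of the old ones.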

Training and Inference with Integers in Deep Neural Networks

ICLR 2018 boluoweifenda/WAGE

Research on deep neural networks with discrete parameters and their deployment in embedded systems has been an active and promising topic.

CONTINUAL LEARNING

Meta-Learning Representations for Continual Learning

NeurIPS 2019 Khurramjaved96/mrcl

We also added the pseudo-code of the algorithms to the main paper as requested by reviewers.

CONTINUAL LEARNING META-LEARNING