Continual Learning

238 papers with code • 14 benchmarks • 13 datasets

Continual Learning is the problem of training a model on a large number of tasks sequentially without forgetting the knowledge obtained from preceding tasks, even though the data from old tasks is no longer available while training on new ones.
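A minimal runnable sketch of this protocol (a toy one-parameter model with hypothetical helper names, not any specific method from the papers below): each task's data is discarded before the next task arrives, so plain sequential training retains only the most recent task.

```python
def continual_train(model, task_stream, update):
    """Train on tasks one after another; each task's data is dropped
    before the next task begins, matching the definition above."""
    history = []
    for task_data in task_stream:
        for x, y in task_data:
            update(model, x, y)
        history.append(model["w"])  # snapshot after each task
        del task_data               # old data is unavailable from here on
    return history

# Toy learner: a single weight w fit online to y ≈ w * x.
def sgd_step(model, x, y, lr=0.1):
    model["w"] -= lr * (model["w"] * x - y) * x

model = {"w": 0.0}
task_a = [(1.0, 2.0)] * 50    # task A wants w = 2
task_b = [(1.0, -1.0)] * 50   # task B wants w = -1
after_a, after_b = continual_train(model, [task_a, task_b], sgd_step)
# after_a ≈ 2.0, but after_b ≈ -1.0: without extra machinery, only the
# most recent task survives.
```

This naive baseline is what the methods listed below try to improve on.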

Source: Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation

Source: Lifelong Machine Learning

Source: Continual lifelong learning with neural networks: A review

Greatest papers with code

A Combinatorial Perspective on Transfer Learning

deepmind/deepmind-research NeurIPS 2020

Our main postulate is that the combination of task segmentation, modular learning and memory-based ensembling can give rise to generalization on an exponentially growing number of unseen tasks.

Continual Learning • Transfer Learning

Continual Unsupervised Representation Learning

deepmind/deepmind-research NeurIPS 2019

Continual learning aims to improve the ability of modern learning systems to deal with non-stationary distributions, typically by attempting to learn a series of tasks sequentially.

Continual Learning • Unsupervised Representation Learning

River: machine learning for streaming data in Python

creme-ml/creme 8 Dec 2020

It is the result of the merger of the two most popular packages for stream learning in Python: Creme and scikit-multiflow.

Continual Learning
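Stream learners like River's estimators process one example at a time with bounded memory, following a `learn_one` / `predict_one` pattern. The class below is a dependency-free stand-in illustrating that paradigm (an online nearest-class-mean "classifier"), not River's actual implementation:

```python
class OnlineNearestMean:
    """Toy stream learner: maintains a running mean per class and
    predicts the class whose mean is closest to the input."""

    def __init__(self):
        self.sums, self.counts = {}, {}

    def learn_one(self, x, y):
        # constant-time update; no past samples are stored
        self.sums[y] = self.sums.get(y, 0.0) + x
        self.counts[y] = self.counts.get(y, 0) + 1

    def predict_one(self, x):
        means = {y: self.sums[y] / self.counts[y] for y in self.sums}
        return min(means, key=lambda y: abs(x - means[y])) if means else None

model = OnlineNearestMean()
for x, y in [(0.1, "a"), (0.2, "a"), (5.0, "b"), (5.2, "b")]:
    model.learn_one(x, y)
# model.predict_one(0.15) -> "a"; model.predict_one(4.9) -> "b"
```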

Three scenarios for continual learning

GMvandeVen/continual-learning 15 Apr 2019

Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult for machine-learning systems.

Class-Incremental Learning • Incremental Learning
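Catastrophic forgetting is easy to reproduce in miniature. The sketch below (a toy one-weight perceptron, not any paper's setup) trains on task A, then on a task B with conflicting labels; accuracy on task A collapses to zero:

```python
def predict(w, x):
    return 1 if w * x > 0 else 0

def train(w, data, lr=0.5, epochs=10):
    # perceptron updates: nudge w only on misclassified samples
    for _ in range(epochs):
        for x, y in data:
            if predict(w, x) != y:
                w += lr * (1 if y == 1 else -1) * x
    return w

def accuracy(w, data):
    return sum(predict(w, x) == y for x, y in data) / len(data)

task_a = [(-2, 0), (-1, 0), (1, 1), (2, 1)]   # label = 1 iff x > 0
task_b = [(x, 1 - y) for x, y in task_a]      # same inputs, flipped labels

w = train(0.0, task_a)
acc_a_before = accuracy(w, task_a)            # 1.0: task A learned
w = train(w, task_b)                          # sequential training on B
acc_a_after = accuracy(w, task_a)             # 0.0: task A forgotten
```

The three scenarios in the paper differ in what is shared across tasks (task identity, input domain, or label space), but all must cope with this failure mode.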

Generative replay with feedback connections as a general strategy for continual learning

GMvandeVen/continual-learning 27 Sep 2018

A major obstacle to developing artificial intelligence applications capable of true lifelong learning is that artificial neural networks quickly or catastrophically forget previously learned tasks when trained on a new one.

Continual Learning
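The replay strategy can be sketched in miniature: while training on a new task, mix in pseudo-samples whose inputs come from a generator of old-task-like data and whose targets come from the frozen previous model (the "feedback" labelling). The toy below uses linear regression and a trivial stand-in generator, not the paper's generative model:

```python
def sgd(w, data, lr=0.2, epochs=300):
    # plain SGD for linear regression y ≈ w · x (mutates and returns w)
    for _ in range(epochs):
        for x, y in data:
            r = sum(wi * xi for wi, xi in zip(w, x)) - y  # residual
            for j in range(len(w)):
                w[j] -= lr * r * x[j]
    return w

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

task_a = [([1.0, 0.0], 1.0)]   # task A: input [1,0] -> 1
task_b = [([1.0, 1.0], 0.0)]   # task B: input [1,1] -> 0 (shared feature 0)

# Naive sequential training: task B drags the shared weight away.
w = sgd(sgd([0.0, 0.0], task_a), task_b)
naive_err = abs(predict(w, [1.0, 0.0]) - 1.0)    # ~0.5: task A forgotten

# Replay: a stand-in generator emits task-A-like inputs; the frozen old
# model provides their targets, and both tasks are then satisfied.
old = sgd([0.0, 0.0], task_a)
replay = [(x, predict(old, x)) for x in [[1.0, 0.0]]]
w = sgd(list(old), task_b + replay)
replay_err = abs(predict(w, [1.0, 0.0]) - 1.0)   # ~0.0: task A retained
```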

Avalanche: an End-to-End Library for Continual Learning

ContinualAI/avalanche 1 Apr 2021

Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning.

Continual Learning

Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes

google-research/meta-dataset NeurIPS 2019

We introduce a conditional neural process based approach to the multi-task classification setting for this purpose, and establish connections to the meta-learning and few-shot learning literature.

Active Learning • Continual Learning +3

Gradient Episodic Memory for Continual Learning

facebookresearch/GradientEpisodicMemory NeurIPS 2017

One major obstacle towards AI is the poor ability of models to solve new problems more quickly and without forgetting previously acquired knowledge.

Continual Learning
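GEM stores an episodic memory per past task and requires the new task's gradient g to have a non-negative inner product with the gradient computed on each memory; violated constraints trigger a projection (a quadratic program in general). For a single past-task constraint the projection has a closed form, sketched here with plain lists:

```python
def project(g, g_ref):
    """Return the smallest change to g such that <g', g_ref> >= 0,
    i.e. the update no longer increases the loss on the memory."""
    dot = sum(a * b for a, b in zip(g, g_ref))
    if dot >= 0:
        return g  # no interference with the past task: keep g as-is
    ref_sq = sum(b * b for b in g_ref)
    return [a - (dot / ref_sq) * b for a, b in zip(g, g_ref)]

# g conflicts with the past-task gradient along the second coordinate,
# so that component is projected out:
g_new = project([1.0, -1.0], [0.0, 1.0])   # -> [1.0, 0.0]
```

With several past tasks, GEM enforces all constraints jointly via a QP; the averaged single-constraint version above is the simplification later adopted by A-GEM.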

Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines

GT-RIPL/Continual-Learning-Benchmark 30 Oct 2018

Continual learning has received a great deal of attention recently with several approaches being proposed.

Continual Learning • L2 Regularization
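One of the strong baselines the paper argues for is L2 regularization toward the previous task's weights: the training loss becomes L_task + (λ/2) Σᵢ (θᵢ − θᵢ*)², where θ* are the parameters after the previous task. A minimal sketch (hypothetical function names; EWC generalizes this by weighting each term with a Fisher-information estimate):

```python
def l2_penalty(theta, theta_old, lam):
    """Penalty added to the task loss: (lam/2) * sum_i (t_i - o_i)^2."""
    return 0.5 * lam * sum((t - o) ** 2 for t, o in zip(theta, theta_old))

def l2_penalty_grad(theta, theta_old, lam):
    """Term added to the task-loss gradient while training the new task."""
    return [lam * (t - o) for t, o in zip(theta, theta_old)]

# Parameters that drift from their old values are pulled back:
pen = l2_penalty([1.0, 2.0], [0.0, 0.0], 1.0)        # 2.5
grad = l2_penalty_grad([1.0, 2.0], [0.0, 0.0], 1.0)  # [1.0, 2.0]
```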

Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion

NVlabs/DeepInversion CVPR 2020

We introduce DeepInversion, a new method for synthesizing images from the image distribution used to train a deep neural network.

Continual Learning • Network Pruning +1
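The core mechanism is gradient descent on the input itself: start from noise and optimize an image so that a frozen network assigns it a chosen label, subject to regularizers (the paper's key one matches feature statistics stored in BatchNorm layers; it is omitted in this toy sketch, which inverts only a frozen linear scorer):

```python
def invert(w_c, lam=0.5, lr=0.1, steps=500):
    """Toy 'inversion': optimize input x to maximize the frozen class
    score w_c · x minus an L2 prior lam * ||x||^2 on the input."""
    x = [0.0] * len(w_c)  # deterministic stand-in for a noise init
    for _ in range(steps):
        # gradient ascent step: d/dx [w_c·x - lam*||x||^2] = w_c - 2*lam*x
        for j in range(len(x)):
            x[j] += lr * (w_c[j] - 2 * lam * x[j])
    return x

x = invert([1.0, -2.0])  # converges toward w_c / (2 * lam)
```

In DeepInversion proper, x is a batch of images, the scorer is a trained CNN, and the BN-statistics term is what makes the synthesized images realistic enough for data-free knowledge transfer and continual learning.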