Continual Learning

410 papers with code • 17 benchmarks • 21 datasets

Continual Learning (also known as Incremental Learning or Lifelong Learning) is the problem of training a model on a large number of tasks sequentially, without forgetting the knowledge obtained from preceding tasks, even though data from old tasks is no longer available while training on new ones.
Unless stated otherwise, the benchmarks here use the task-incremental setting (Task-CL), in which the task identity is provided at test time.
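
To make this setting concrete, here is a minimal sketch of the sequential training protocol in PyTorch. The network, loaders, and multi-head layout (SmallNet, task_loaders) are illustrative assumptions, not part of any specific benchmark or library.

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """One shared trunk with a separate output head per task."""
    def __init__(self, n_tasks: int, in_dim: int = 784, n_classes: int = 2):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(256, n_classes) for _ in range(n_tasks))

    def forward(self, x, task_id: int):
        # In Task-CL the task id is available, so the matching head is used.
        return self.heads[task_id](self.trunk(x))

def train_sequentially(model, task_loaders, epochs=1, lr=1e-3):
    """Visit tasks one at a time; old task data is never revisited."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for task_id, loader in enumerate(task_loaders):
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = nn.functional.cross_entropy(model(x, task_id), y)
                loss.backward()
                opt.step()
        # Without a continual-learning method, accuracy on earlier tasks
        # typically degrades after this point (catastrophic forgetting).
```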

Sources:
Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
Three scenarios for continual learning
Lifelong Machine Learning
Continual lifelong learning with neural networks: A review


Most implemented papers

Overcoming catastrophic forgetting in neural networks

ContinualAI/avalanche 2 Dec 2016

The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence.
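
The paper's method, elastic weight consolidation (EWC), slows learning on parameters that were important for old tasks via a quadratic penalty weighted by a diagonal Fisher information estimate. A minimal sketch of that penalty, assuming a classifier model(x) that returns logits; lam and n_batches are illustrative hyperparameters.

```python
import torch

def fisher_diagonal(model, loader, n_batches=50):
    """Diagonal Fisher estimate: mean squared log-likelihood gradients."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for i, (x, y) in enumerate(loader):
        if i >= n_batches:
            break
        model.zero_grad()
        log_probs = torch.log_softmax(model(x), dim=1)
        # Gradient of the log-likelihood of the observed labels
        torch.nn.functional.nll_loss(log_probs, y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2 / n_batches
    return fisher

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """Quadratic penalty anchoring parameters important to the old task."""
    loss = 0.0
    for n, p in model.named_parameters():
        loss = loss + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * loss
```

In use, fisher and old_params would be computed at the end of each task, and ewc_penalty added to the new task's loss.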

Progressive Neural Networks

ContinualAI/avalanche 15 Jun 2016

Learning to solve complex sequences of tasks, while both leveraging transfer and avoiding catastrophic forgetting, remains a key obstacle to achieving human-level intelligence.
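
The architecture freezes a trained "column" per task and gives later columns lateral connections to the frozen features of earlier ones. A rough two-column sketch under the assumption of single hidden layers and a fixed input size; all module names are illustrative.

```python
import torch
import torch.nn as nn

class ProgressiveNet(nn.Module):
    """Two progressive 'columns': column 0 is trained on task 0 and frozen;
    column 1 receives a lateral connection from column 0's hidden layer."""
    def __init__(self, in_dim=784, hidden=128, n_classes=10):
        super().__init__()
        self.col0_h = nn.Linear(in_dim, hidden)
        self.col0_out = nn.Linear(hidden, n_classes)
        self.col1_h = nn.Linear(in_dim, hidden)
        self.lateral = nn.Linear(hidden, hidden)   # task-0 features -> task 1
        self.col1_out = nn.Linear(hidden, n_classes)

    def freeze_column0(self):
        for p in [*self.col0_h.parameters(), *self.col0_out.parameters()]:
            p.requires_grad = False

    def forward_task0(self, x):
        return self.col0_out(torch.relu(self.col0_h(x)))

    def forward_task1(self, x):
        h0 = torch.relu(self.col0_h(x))            # frozen features, reused
        h1 = torch.relu(self.col1_h(x) + self.lateral(h0))
        return self.col1_out(h1)
```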

Three scenarios for continual learning

GMvandeVen/continual-learning 15 Apr 2019

Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, which makes continual or lifelong learning difficult.

Learning without Forgetting

ContinualAI/avalanche 29 Jun 2016

We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities.
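
Concretely, a frozen copy of the network is snapshotted before each new task, and its responses on new-task inputs are distilled back into the updated network. The sketch below collapses LwF's separate old-task and new-task output heads into a single head for brevity; T and alpha are the usual distillation temperature and weighting.

```python
import torch
import torch.nn.functional as F

def lwf_step(model, old_model, x_new, y_new, opt, T=2.0, alpha=1.0):
    """One LwF-style update: new-task loss plus distillation toward a frozen
    snapshot of the network, using only new-task data.
    old_model would be created before the task starts, e.g.
    old_model = copy.deepcopy(model).eval()."""
    with torch.no_grad():
        old_logits = old_model(x_new)              # soft targets for old tasks
    logits = model(x_new)
    ce = F.cross_entropy(logits, y_new)            # new-task objective
    kd = F.kl_div(                                 # keep old responses stable
        F.log_softmax(logits / T, dim=1),
        F.softmax(old_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    opt.zero_grad()
    (ce + alpha * kd).backward()
    opt.step()
```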

Variational Continual Learning

nvcuong/variational-continual-learning ICLR 2018

This paper develops variational continual learning (VCL), a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks.
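
The online VI update at the core of VCL can be written in one line: the approximate posterior learned after task t-1 acts as the prior when task t's data arrives.

```latex
% Recursive variational update: project the (unnormalized) product of the
% previous approximate posterior and the new task's likelihood back into
% the variational family Q.
\[
  q_t(\theta) \;=\; \operatorname*{arg\,min}_{q \in \mathcal{Q}}\;
  \mathrm{KL}\!\left( q(\theta) \,\middle\|\,
  \tfrac{1}{Z_t}\, q_{t-1}(\theta)\, p(\mathcal{D}_t \mid \theta) \right)
\]
```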

Meta-Learning Representations for Continual Learning

Khurramjaved96/mrcl NeurIPS 2019

We show that it is possible to learn naturally sparse representations that are more effective for online updating.

Generative replay with feedback connections as a general strategy for continual learning

GMvandeVen/continual-learning 27 Sep 2018

A major obstacle to developing artificial intelligence applications capable of true lifelong learning is that artificial neural networks quickly and catastrophically forget previously learned tasks when trained on a new one.
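
In the generative-replay recipe, a generator trained on earlier tasks synthesizes pseudo-inputs, the previous solver labels them, and this replayed data is interleaved with real new-task data. A sketch assuming a generator that exposes a sample(n) method (an illustrative interface, not a fixed API); the generator's own training step is elided.

```python
import torch
import torch.nn.functional as F

def train_with_generative_replay(solver, old_solver, old_generator,
                                 new_loader, opt_s, replay_ratio=1.0):
    """One epoch of solver training with generative replay (sketch)."""
    for x_new, y_new in new_loader:
        loss = F.cross_entropy(solver(x_new), y_new)
        if old_generator is not None:              # None on the first task
            n_replay = int(replay_ratio * x_new.size(0))
            with torch.no_grad():
                x_old = old_generator.sample(n_replay)   # pseudo-inputs
                y_old = old_solver(x_old).argmax(dim=1)  # pseudo-labels
            loss = loss + F.cross_entropy(solver(x_old), y_old)
        opt_s.zero_grad()
        loss.backward()
        opt_s.step()
```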

Continual learning with hypernetworks

chrhenning/hypercl ICLR 2020

Artificial neural networks suffer from catastrophic forgetting when they are sequentially trained on multiple tasks.
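
The paper's remedy is to generate the target network's weights from a learned task embedding, and to regularize the hypernetwork so that the weights it produces for earlier tasks stay fixed. A heavily simplified sketch with a linear target network; LinearHypernet, the embedding size, and the stored_weights cache are illustrative assumptions, not the paper's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearHypernet(nn.Module):
    """A hypernetwork maps a learned task embedding to the weights of a
    small linear target network; only the embeddings and the hypernetwork
    itself are trained."""
    def __init__(self, n_tasks, emb_dim=64, in_dim=784, out_dim=10):
        super().__init__()
        self.task_emb = nn.Embedding(n_tasks, emb_dim)
        self.weight_gen = nn.Linear(emb_dim, in_dim * out_dim)
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, x, task_id):
        e = self.task_emb(torch.tensor([task_id]))
        W = self.weight_gen(e).view(self.out_dim, self.in_dim)
        return F.linear(x, W)

def output_regularizer(hnet, old_task_ids, stored_weights, beta=0.01):
    """Penalize drift of the weights generated for earlier tasks; a
    simplified version of the paper's task-conditioned output regularizer.
    stored_weights caches each old task's generated weights (detached)."""
    reg = 0.0
    for t in old_task_ids:
        e = hnet.task_emb(torch.tensor([t]))
        W = hnet.weight_gen(e).view(hnet.out_dim, hnet.in_dim)
        reg = reg + ((W - stored_weights[t]) ** 2).sum()
    return beta * reg
```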

Learning to Continually Learn

uvm-neurobotics-lab/ANML 21 Feb 2020

Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it.

Continual Learning Through Synaptic Intelligence

ganguli-lab/pathint ICML 2017

While deep learning has led to remarkable advances across diverse applications, it struggles in domains where the data distribution changes over the course of learning.
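
Synaptic intelligence estimates each parameter's importance online, by accumulating the product of the gradient and the parameter update along the training trajectory, then penalizes movement of important parameters on later tasks. A condensed sketch, with xi and c as the paper's damping and regularization-strength hyperparameters; it glosses over some details, such as excluding the penalty's own gradient from the accumulation.

```python
import torch

class SynapticIntelligence:
    """Online importance accumulation for one task, plus an EWC-style
    quadratic penalty built from the normalized path integral."""
    def __init__(self, model, xi=0.1, c=0.1):
        self.model, self.xi, self.c = model, xi, c
        self.w = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        self.omega = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        self.theta_start = {n: p.detach().clone() for n, p in model.named_parameters()}
        self.prev = {n: p.detach().clone() for n, p in model.named_parameters()}

    def accumulate(self):
        """Call right after opt.step(), while grads are still populated."""
        for n, p in self.model.named_parameters():
            if p.grad is not None:
                delta = p.detach() - self.prev[n]
                self.w[n] -= p.grad.detach() * delta   # -grad * step
                self.prev[n] = p.detach().clone()

    def consolidate(self):
        """Call at the end of a task: turn path integrals into importances."""
        for n, p in self.model.named_parameters():
            total = p.detach() - self.theta_start[n]
            self.omega[n] += self.w[n] / (total ** 2 + self.xi)
            self.w[n].zero_()
            self.theta_start[n] = p.detach().clone()   # anchor for next task

    def penalty(self):
        return self.c * sum(
            (self.omega[n] * (p - self.theta_start[n]) ** 2).sum()
            for n, p in self.model.named_parameters()
        )
```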