Continual Learning
1017 papers with code • 32 benchmarks • 34 datasets
Continual Learning (also known as Incremental Learning or Lifelong Learning) is the problem of learning a model for a large number of tasks sequentially, without forgetting knowledge obtained from the preceding tasks, even though data from the old tasks is no longer available when training on new ones.
Unless otherwise noted, the benchmarks here are task-incremental (Task-CL), meaning the task identity is provided at evaluation time. A minimal sketch of the overall setting is given after the source list below.
Source:
Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
Three scenarios for continual learning
Lifelong Machine Learning
Continual lifelong learning with neural networks: A review
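To make the setting concrete, here is a minimal sketch of sequential training in which each task's data is discarded once the task is finished; the model, data, and hyperparameters are illustrative, not taken from any specific paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Three synthetic tasks, seen strictly one after another.
tasks = [(torch.randn(256, 20), torch.randint(0, 2, (256,))) for _ in range(3)]

for task_id, (x, y) in enumerate(tasks):
    for _ in range(100):                 # train on the current task only
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # (x, y) is now discarded: later tasks cannot revisit it, which is what
    # makes catastrophic forgetting the central difficulty.
```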
Libraries
Use these libraries to find Continual Learning models and implementations
Datasets
Subtasks
Most implemented papers
Overcoming catastrophic forgetting in neural networks
The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence.
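This paper introduces elastic weight consolidation (EWC), which slows learning on weights that were important for earlier tasks via a quadratic penalty weighted by a diagonal Fisher information estimate. A minimal sketch of that penalty, assuming precomputed dictionaries `fisher` and `old_params` (hypothetical names) keyed by parameter name:

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    # (lam / 2) * sum_i F_i * (theta_i - theta_star_i)^2, summed over the
    # parameters anchored after the previous task.
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

# total_loss = task_loss + ewc_penalty(model, fisher, old_params)
```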
Progressive Neural Networks
Learning to solve complex sequences of tasks, while both leveraging transfer and avoiding catastrophic forgetting, remains a key obstacle to achieving human-level intelligence.
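Progressive networks sidestep forgetting by freezing the column trained on each task and giving new columns lateral connections into the frozen ones. A two-column sketch under assumed shapes (all names illustrative):

```python
import torch
import torch.nn as nn

class ProgressiveNet(nn.Module):
    def __init__(self, d_in=20, d_h=64, d_out=2):
        super().__init__()
        self.col1_hidden = nn.Linear(d_in, d_h)        # trained on task 1, then frozen
        self.col2_hidden = nn.Linear(d_in, d_h)        # trained on task 2
        self.lateral = nn.Linear(d_h, d_h, bias=False) # column 1 -> column 2
        self.col2_out = nn.Linear(d_h, d_out)

    def forward_task2(self, x):
        with torch.no_grad():                          # frozen column: no forgetting
            h1 = torch.relu(self.col1_hidden(x))
        h2 = torch.relu(self.col2_hidden(x) + self.lateral(h1))
        return self.col2_out(h2)
```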
Learning without Forgetting
We propose the Learning without Forgetting method, which uses only new-task data to train the network while preserving its original capabilities.
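The preservation comes from knowledge distillation: before training on the new task, the old model's outputs on the new data are recorded, and the updated network is penalized for drifting from them. A sketch of the combined objective, where the head split, temperature, and weighting follow the common distillation recipe rather than any exact configuration from the paper:

```python
import torch.nn.functional as F

def lwf_loss(new_head_logits, old_head_logits, recorded_logits, y, T=2.0, alpha=1.0):
    ce = F.cross_entropy(new_head_logits, y)           # supervision on the new task
    kd = F.kl_div(F.log_softmax(old_head_logits / T, dim=1),
                  F.softmax(recorded_logits / T, dim=1),
                  reduction="batchmean") * (T * T)     # keep old-task outputs stable
    return ce + alpha * kd
```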
Variational Continual Learning
This paper develops variational continual learning (VCL), a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks.
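The heart of VCL is an online Bayes recursion: the approximate posterior after task t is fitted to the previous approximation multiplied by the new task's likelihood. In LaTeX (notation paraphrased from the paper, with Z_t a normalizer):

```latex
q_t(\theta) = \operatorname*{arg\,min}_{q \in \mathcal{Q}}
  \mathrm{KL}\!\left( q(\theta) \,\middle\|\,
  \tfrac{1}{Z_t}\, q_{t-1}(\theta)\, p(\mathcal{D}_t \mid \theta) \right)
```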
Three scenarios for continual learning
Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, which makes continual or lifelong learning difficult.
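The paper distinguishes task-incremental, domain-incremental, and class-incremental learning by whether task identity is given at test time and, if not, whether it must be inferred. An illustrative sketch for a split-MNIST-style stream with two classes per task (the output layout here is an assumption, not the paper's code):

```python
def predict(logits, task_id, scenario):
    if scenario == "task-IL":
        # Task identity is given: choose only among that task's two classes.
        lo = 2 * task_id
        return lo + logits[:, lo:lo + 2].argmax(dim=1)
    if scenario == "domain-IL":
        # Task identity is not given, but every task shares one two-way label
        # space, so a single shared head is used.
        return logits[:, :2].argmax(dim=1)
    # class-IL: task identity is not given and the model must choose among all
    # classes seen so far; this is the hardest of the three scenarios.
    return logits.argmax(dim=1)
```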
Continual learning with hypernetworks
Artificial neural networks suffer from catastrophic forgetting when they are sequentially trained on multiple tasks.
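The idea is to train a task-conditioned hypernetwork that emits the target network's weights from a learned task embedding, so knowledge is retained in weight space rather than by replaying data. A minimal single-layer sketch with illustrative names and shapes:

```python
import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    def __init__(self, n_tasks, emb_dim=8, d_in=20, d_out=2):
        super().__init__()
        self.d_in, self.d_out = d_in, d_out
        self.task_emb = nn.Embedding(n_tasks, emb_dim)         # one embedding per task
        self.hyper = nn.Linear(emb_dim, d_out * d_in + d_out)  # emits W and b

    def forward(self, x, task_id):
        params = self.hyper(self.task_emb(torch.tensor(task_id)))
        W = params[: self.d_out * self.d_in].view(self.d_out, self.d_in)
        b = params[self.d_out * self.d_in:]
        return x @ W.t() + b
```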
Gradient Episodic Memory for Continual Learning
One major obstacle towards AI is the poor ability of models to solve new problems quickly without forgetting previously acquired knowledge.
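GEM keeps a small episodic memory per task and constrains each update so that the loss on the memories does not increase, solving a small quadratic program over per-task gradient constraints. The sketch below shows the simpler single-constraint projection popularized by the follow-up A-GEM, which conveys the same idea:

```python
import torch

def project_gradient(g, g_ref):
    # g: proposed update direction; g_ref: gradient on the episodic memory.
    # If they conflict (negative inner product), project g onto the half-space
    # where the memory loss is non-increasing.
    dot = torch.dot(g, g_ref)
    if dot < 0:
        g = g - (dot / torch.dot(g_ref, g_ref)) * g_ref
    return g
```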
Continual Learning Through Synaptic Intelligence
While deep learning has led to remarkable advances across diverse applications, it struggles in domains where the data distribution changes over the course of learning.
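Synaptic Intelligence estimates each parameter's importance online, by accumulating its contribution to the decrease in loss along the training trajectory, and then penalizes changes to important parameters. A rough sketch under assumed bookkeeping (the per-step gradient and parameter snapshots are hypothetical names):

```python
import torch

def accumulate_omega(model, grads_before, params_before, omega):
    # Path-integral credit: how much each parameter's movement reduced the loss.
    for name, p in model.named_parameters():
        delta = p.detach() - params_before[name]
        omega[name] += -grads_before[name] * delta

def si_penalty(model, importance, ref_params, c=0.1):
    # importance = omega normalized by the squared total per-task change.
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (importance[name] * (p - ref_params[name]) ** 2).sum()
    return c * loss
```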
On Tiny Episodic Memories in Continual Learning
But for successful knowledge transfer, the learner needs to remember how to perform previous tasks.
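The paper studies experience replay with very small episodic memories. A common way to maintain such a memory over a stream is reservoir sampling, sketched below (class name and sizes are illustrative):

```python
import random
import torch

class TinyMemory:
    def __init__(self, capacity=100):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, x, y):
        # Reservoir sampling: every example seen so far has an equal chance
        # of occupying one of the `capacity` memory slots.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

# During training, replay batches from memory alongside current-task batches.
```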
Meta-Learning Representations for Continual Learning
We show that it is possible to learn naturally sparse representations that are more effective for online updating.
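The setup meta-learns a representation offline so that a small prediction head can be updated online, one example at a time, with little interference; sparsity in the representation limits which weights each update touches. A minimal sketch of the online phase with the encoder held fixed (synthetic data; all names illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
encoder = nn.Sequential(nn.Linear(20, 64), nn.ReLU())  # meta-learned, kept frozen here
head = nn.Linear(64, 2)                                # updated online
opt = torch.optim.SGD(head.parameters(), lr=0.01)

stream = [(torch.randn(1, 20), torch.randint(0, 2, (1,))) for _ in range(50)]

for x, y in stream:                  # one example at a time
    with torch.no_grad():
        z = encoder(x)               # sparse features localize each update
    loss = F.cross_entropy(head(z), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```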