Continual Learning
821 papers with code • 29 benchmarks • 30 datasets
Continual Learning (also known as Incremental Learning or Lifelong Learning) is the problem of learning a model over a large number of tasks sequentially, without forgetting knowledge obtained from preceding tasks, where data from old tasks is no longer available when training on new ones.
Unless stated otherwise, the benchmarks here are Task-CL (task-incremental), where the task ID is provided at evaluation time.
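The failure mode this setting targets, catastrophic forgetting, can be demonstrated in a few lines: a linear classifier is trained on one synthetic task and then on a second task with conflicting labels, and its accuracy on the first task collapses. The tasks and model below are illustrative toy constructions, not drawn from any benchmark here.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, flip):
    # synthetic 2-feature binary task; flip=True inverts the decision rule,
    # so the two tasks directly conflict
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(float)
    if flip:
        y = 1.0 - y
    return X, y

def train(w, X, y, lr=0.5, epochs=200):
    # plain full-batch gradient descent on the logistic loss
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0) == (y > 0.5)).mean())

Xa, ya = make_task(500, flip=False)   # Task A
Xb, yb = make_task(500, flip=True)    # Task B (opposite labels)

w = np.zeros(2)
w = train(w, Xa, ya)
acc_a_before = accuracy(w, Xa, ya)    # high: model fits Task A

w = train(w, Xb, yb)                  # continue training on Task B only
acc_a_after = accuracy(w, Xa, ya)     # low: Task A has been overwritten
```

Because the model sees only Task B's data in the second phase, nothing prevents the weights that encoded Task A from being overwritten; continual-learning methods add that missing constraint.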
Sources:
- Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
- Three scenarios for continual learning
- Lifelong Machine Learning
- Continual lifelong learning with neural networks: A review
Latest papers
Addressing Loss of Plasticity and Catastrophic Forgetting in Continual Learning
Deep representation learning methods struggle with continual learning, suffering both catastrophic forgetting of useful units and loss of plasticity, often because units become rigid and cease to be useful.
InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning
Furthermore, InfLoRA designs this subspace to eliminate interference from the new task on old tasks, striking a good trade-off between stability and plasticity.
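InfLoRA's specific interference-free subspace construction is not reproduced here, but the underlying low-rank adaptation idea can be sketched generically: the pretrained weight stays frozen and only a rank-r update B @ A is trained per task. All shapes and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 64, 4

W = rng.normal(size=(d_out, d_in))          # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, rank))                 # trainable up-projection, zero-init

def forward(x):
    # low-rank update: only A and B are trained per task, i.e. at most
    # rank * (d_in + d_out) parameters, while W never changes
    return (W + B @ A) @ x

x = rng.normal(size=d_in)
y = forward(x)
```

Zero-initializing B means the adapted layer starts out exactly equal to the pretrained one, so training begins from the frozen model's behavior.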
Orchestrate Latent Expertise: Advancing Online Continual Learning with Multi-Level Supervision and Reverse Self-Distillation
However, a notable gap between CL and OCL stems from the additional overfitting-underfitting dilemma introduced by rehearsal buffers: inadequate learning of new training samples (underfitting) and repeated learning of a few old training samples (overfitting).
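A rehearsal buffer of the kind discussed above is commonly filled by reservoir sampling, which keeps a fixed-size, approximately uniform sample of the stream without knowing its length in advance. A minimal sketch (class and method names are illustrative, not from the paper):

```python
import random

class ReplayBuffer:
    """Fixed-size rehearsal buffer filled by reservoir sampling:
    after n stream samples, each has probability capacity / n of
    being stored."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            # replace a random slot with probability capacity / n_seen
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.buffer[j] = sample

    def sample(self, k):
        # mini-batch of old samples to replay alongside new data
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))

buf = ReplayBuffer(capacity=10)
for step in range(1000):
    buf.add(step)              # stream of "samples" (here just ints)
replay_batch = buf.sample(4)
```

The dilemma in the snippet above arises directly from this design: each new sample is seen once (underfitting), while the few stored samples are replayed many times (overfitting).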
ECLIPSE: Efficient Continual Learning in Panoptic Segmentation with Visual Prompt Tuning
Panoptic segmentation, combining semantic and instance segmentation, stands as a cutting-edge computer vision task.
CLAP4CLIP: Continual Learning with Probabilistic Finetuning for Vision-Language Models
The deterministic nature of existing finetuning methods leads them to overlook the many possible interactions across modalities and makes them unsafe for high-risk CL tasks requiring reliable uncertainty estimation.
DS-AL: A Dual-Stream Analytic Learning for Exemplar-Free Class-Incremental Learning
The compensation stream is governed by a Dual-Activation Compensation (DAC) module.
G-ACIL: Analytic Learning for Exemplar-Free Generalized Class Incremental Learning
Generalized CIL (GCIL) aims to address the CIL problem in a more realistic scenario, where incoming data have mixed categories and unknown sample-size distributions, leading to intensified forgetting.
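Analytic learning, as used in the two exemplar-free entries above, replaces gradient descent on the classifier with a recursive least-squares update, so each sample is folded into a closed-form ridge solution and never needs to be stored or replayed. The sketch below shows the generic recursive update (via the Sherman-Morrison identity), not the specific DS-AL or G-ACIL architecture; all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, c, lam = 8, 3, 1.0    # feature dim, number of classes, ridge strength

# running inverse of the regularized Gram matrix, and the classifier
R = np.eye(d) / lam
W = np.zeros((d, c))

def update(x, y_onehot):
    # fold one sample into the closed-form ridge solution without
    # revisiting any previous sample (exemplar-free)
    global R, W
    x = x.reshape(-1, 1)
    Rx = R @ x
    k = Rx / (1.0 + x.T @ Rx)          # gain vector (Sherman-Morrison)
    R -= k @ Rx.T
    W += k @ (y_onehot.reshape(1, -1) - x.T @ W)

# stream of labeled features, processed one sample at a time
X = rng.normal(size=(50, d))
labels = rng.integers(0, c, size=50)
Y = np.eye(c)[labels]
for x_i, y_i in zip(X, Y):
    update(x_i, y_i)
```

After the stream ends, W equals the batch ridge-regression solution on all 50 samples exactly, even though no sample was kept.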
A Unified and General Framework for Continual Learning
Extensive experiments on CL benchmarks and theoretical analysis demonstrate the effectiveness of the proposed refresh learning.
Predictive, scalable and interpretable knowledge tracing on structured domains
This requires estimates of both the learner's progress (''knowledge tracing''; KT), and the prerequisite structure of the learning domain (''knowledge mapping'').
Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters
Continual learning can empower vision-language models to continuously acquire new knowledge without requiring access to the entire historical dataset.
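The paper's specific adapter design is not reproduced here, but the mixture-of-experts idea it builds on can be sketched generically: a learned gate produces a softmax over several small expert layers and the adapter output is their weighted combination. All names, shapes, and the dense (non-sparse) routing below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts = 16, 4

# each expert is a small linear adapter; the gate routes per input
experts = [rng.normal(scale=0.1, size=(d, d)) for _ in range(n_experts)]
gate_W = rng.normal(scale=0.1, size=(n_experts, d))

def moe_adapter(x):
    logits = gate_W @ x
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                     # softmax over experts
    # weighted sum of expert outputs (dense routing, for clarity)
    return sum(p * (E @ x) for p, E in zip(probs, experts))

x = rng.normal(size=d)
out = moe_adapter(x)
```

In a continual setting, new experts can be added for new tasks while old experts stay frozen, which is one way such adapters limit forgetting.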