Continual Learning
835 papers with code • 29 benchmarks • 30 datasets
Continual Learning (also known as Incremental Learning or Lifelong Learning) is the problem of learning a model over a large number of tasks sequentially, without forgetting knowledge obtained from preceding tasks, where data from old tasks is no longer available while training on new ones.
Unless stated otherwise, the benchmarks listed here are Task-CL (task-incremental learning), where the task ID is provided at evaluation time; a minimal sketch of this protocol follows the sources below.
Sources:
Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
Three scenarios for continual learning
Lifelong Machine Learning
Continual lifelong learning with neural networks: A review
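The Task-CL protocol is easiest to see in code. Below is a minimal sketch (PyTorch, synthetic data; `MultiHeadNet` and all hyperparameters are illustrative, not taken from any of the cited works): each task trains only its own classifier head, data from earlier tasks is discarded, and the provided task ID selects the head at evaluation time.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class MultiHeadNet(nn.Module):
    """Shared backbone with one output head per task (Task-CL)."""
    def __init__(self, in_dim, hidden, classes_per_task, num_tasks):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            nn.Linear(hidden, classes_per_task) for _ in range(num_tasks)
        )

    def forward(self, x, task_id):
        return self.heads[task_id](self.backbone(x))

# Synthetic stand-ins for five sequential tasks.
task_loaders = [
    DataLoader(TensorDataset(torch.randn(256, 32),
                             torch.randint(0, 2, (256,))), batch_size=32)
    for _ in range(5)
]

model = MultiHeadNet(in_dim=32, hidden=64, classes_per_task=2, num_tasks=5)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for task_id, loader in enumerate(task_loaders):
    for x, y in loader:                        # data from earlier tasks is gone
        opt.zero_grad()
        loss_fn(model(x, task_id), y).backward()
        opt.step()

# Evaluation: the task ID is provided, so the matching head is used.
with torch.no_grad():
    preds = model(torch.randn(8, 32), task_id=0).argmax(dim=1)
```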
Libraries
Use these libraries to find Continual Learning models and implementations.
Datasets
Subtasks
Latest papers
BACS: Background Aware Continual Semantic Segmentation
Besides the common problem of classical catastrophic forgetting in the continual learning setting, CSS suffers from the inherent ambiguity of the background, a phenomenon we refer to as the "background shift", since pixels labeled as background could correspond to future classes (forward background shift) or previous classes (backward background shift).
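To make the two shifts concrete, here is a small illustrative sketch (NumPy; the class splits and function name are hypothetical, not BACS's code) of how per-step annotation collapses both past and future classes into the background label:

```python
import numpy as np

BACKGROUND = 0
steps = [{1, 2}, {3, 4}, {5}]          # classes introduced at each step

def observed_labels(full_mask: np.ndarray, step: int) -> np.ndarray:
    """Return the ground truth as seen at `step`: anything outside
    the current step's classes is re-labeled as background."""
    current = steps[step]
    out = np.full_like(full_mask, BACKGROUND)
    for c in current:
        out[full_mask == c] = c
    return out

full_mask = np.array([[1, 3, 5],
                      [0, 2, 4]])
print(observed_labels(full_mask, step=1))
# [[0 3 0]
#  [0 0 4]]  -> classes 1/2 (past) and 5 (future) hide in the background
```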
Continual Learning on a Diet: Learning from Sparsely Labeled Streams Under Constrained Computation
DietCL meticulously allocates its computational budget between both types of data in the stream: the sparse labeled samples and the abundant unlabeled ones.
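As a loose, hypothetical sketch of what a fixed-compute split between the stream's labeled and unlabeled data could look like (the fraction and function name are invented, not DietCL's actual policy):

```python
def split_budget(total_updates: int, labeled_fraction: float = 0.25):
    """Divide a fixed per-step budget of gradient updates between
    supervised updates (labeled data) and unsupervised updates."""
    labeled = int(total_updates * labeled_fraction)
    return labeled, total_updates - labeled

print(split_budget(100))  # (25, 75)
```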
Continual Offline Reinforcement Learning via Diffusion-based Dual Generative Replay
Finally, by interleaving pseudo samples with real ones of the new task, we continually update the state and behavior generators to model progressively diverse behaviors, and regularize the multi-head critic via behavior cloning to mitigate forgetting.
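A simplified, single-network sketch of the generative-replay idea (the paper applies behavior cloning to a multi-head critic; here a stand-in generator, a plain actor, and all names and coefficients are hypothetical): pseudo states are interleaved with real new-task data, and a behavior-cloning term anchors the network to its pre-task snapshot on replayed states.

```python
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2
policy = nn.Linear(state_dim, action_dim)        # current network
old_policy = nn.Linear(state_dim, action_dim)    # frozen pre-task snapshot
old_policy.load_state_dict(policy.state_dict())
for p in old_policy.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
bc_weight = 1.0                                  # hypothetical coefficient

def train_step(real_states, real_actions, generator):
    # Replay pseudo states for earlier tasks from the generator.
    pseudo_states = generator(torch.randn(real_states.size(0), state_dim))
    # New-task loss on real data, interleaved with a behavior-cloning
    # loss that pins the network to its old outputs on replayed states.
    new_loss = nn.functional.mse_loss(policy(real_states), real_actions)
    bc_loss = nn.functional.mse_loss(policy(pseudo_states),
                                     old_policy(pseudo_states))
    loss = new_loss + bc_weight * bc_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with an identity "generator" standing in for the diffusion model.
train_step(torch.randn(32, state_dim), torch.randn(32, action_dim),
           generator=nn.Identity())
```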
E3: Ensemble of Expert Embedders for Adapting Synthetic Image Detectors to New Generators Using Limited Data
To address these issues, we introduce the Ensemble of Expert Embedders (E3), a novel continual learning framework for updating synthetic image detectors.
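A hedged sketch of the ensemble idea as described (one frozen expert embedder per generator seen so far, a new expert adapted from the previous one, features fused for detection); the architecture and names below are illustrative only:

```python
import copy
import torch
import torch.nn as nn

def make_embedder(in_dim=512, emb_dim=64):
    return nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU())

experts = [make_embedder()]                 # expert for the first generator

def add_expert_for_new_generator():
    """Clone the latest expert as initialization, then fine-tune it on
    the limited data from the new generator (fine-tuning omitted here);
    earlier experts stay frozen so old knowledge is preserved."""
    experts.append(copy.deepcopy(experts[-1]))
    for e in experts[:-1]:
        for p in e.parameters():
            p.requires_grad_(False)

add_expert_for_new_generator()
fusion = nn.Linear(64 * len(experts), 2)    # real-vs-synthetic classifier

x = torch.randn(4, 512)
features = torch.cat([e(x) for e in experts], dim=1)
logits = fusion(features)
print(logits.shape)  # torch.Size([4, 2])
```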
Calibration of Continual Learning Models
Continual Learning (CL) focuses on maximizing the predictive performance of a model across a non-stationary stream of data.
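Calibration in this sense is commonly quantified with the Expected Calibration Error (ECE); the sketch below implements the standard metric, not necessarily this paper's exact protocol:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average gap between mean confidence and accuracy per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

conf = np.array([0.9, 0.8, 0.95, 0.6, 0.7])
hit = np.array([1, 1, 0, 1, 0], dtype=float)
print(expected_calibration_error(conf, hit))
```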
Scalable Language Model with Generalized Continual Learning
In this study, we introduce the Scalable Language Model (SLM) to overcome these limitations within a more challenging and generalized setting, representing a significant advancement toward practical applications of continual learning.
F-MALLOC: Feed-forward Memory Allocation for Continual Learning in Neural Machine Translation
In the evolving landscape of Neural Machine Translation (NMT), the pretrain-then-finetune paradigm has yielded impressive results.
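Prompted by the title rather than by a description of F-MALLOC itself, the following loose sketch shows one generic way to "allocate" feed-forward memory: give each task a disjoint slice of a feed-forward layer's hidden units and zero the gradients of units owned by other tasks (all names and splits are hypothetical; a fuller version would also mask the second layer's input columns):

```python
import torch
import torch.nn as nn

hidden = 16
ffn = nn.Sequential(nn.Linear(8, hidden), nn.ReLU(), nn.Linear(hidden, 8))

# Disjoint hidden-unit allocations: task 0 owns units 0-7, task 1 owns 8-15.
masks = {0: torch.arange(0, 8), 1: torch.arange(8, 16)}

def protect_other_tasks(task_id):
    """Zero the gradient rows of hidden units owned by other tasks."""
    frozen = torch.cat([m for t, m in masks.items() if t != task_id])
    def zero_rows(grad):
        grad = grad.clone()
        grad[frozen] = 0.0
        return grad
    ffn[0].weight.register_hook(zero_rows)
    ffn[0].bias.register_hook(zero_rows)

protect_other_tasks(task_id=1)            # task 1 may only update units 8..15
ffn(torch.randn(4, 8)).sum().backward()
assert ffn[0].weight.grad[masks[0]].abs().sum() == 0
```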
Data Stream Sampling with Fuzzy Task Boundaries and Noisy Labels
In the realm of continual learning, the presence of noisy labels within data streams represents a notable obstacle to model reliability and fairness.
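A generic sketch of sampling a stream into a bounded replay buffer via reservoir sampling, with a small-loss heuristic to skip suspected noisy labels (a common heuristic in the noisy-label literature, not necessarily this paper's criterion):

```python
import random

class ReservoirBuffer:
    """Keeps a uniform sample of the accepted stream in O(capacity) memory."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def maybe_add(self, example, loss, loss_threshold=2.0):
        if loss > loss_threshold:      # skip likely-noisy (high-loss) samples
            return
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:      # replace with prob capacity / seen
                self.items[j] = example

buf = ReservoirBuffer(capacity=100)
for i in range(1000):
    buf.maybe_add(example=i, loss=random.random() * 4)
print(len(buf.items))  # at most 100
```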
DELTA: Decoupling Long-Tailed Online Continual Learning
A significant challenge in achieving ubiquitous Artificial Intelligence is the limited ability of models to rapidly learn new information in real-world scenarios where data follows long-tailed distributions, all while avoiding forgetting previously acquired knowledge.
Continual Learning with Weight Interpolation
Continual learning poses a fundamental challenge for modern machine learning systems, requiring models to adapt to new tasks while retaining knowledge from previous ones.
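A minimal sketch of the general weight-interpolation recipe (in the spirit of this line of work, not an exact reproduction of the paper's method): after finetuning on a new task, blend the new weights with the pre-task checkpoint to trade plasticity against forgetting; `alpha` is a hypothetical mixing coefficient.

```python
import copy
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
old_state = copy.deepcopy(model.state_dict())   # snapshot before the new task

# ... train `model` on the new task here ...

alpha = 0.5   # 0 -> keep old weights, 1 -> keep new weights
with torch.no_grad():
    for name, param in model.named_parameters():
        # param = alpha * new + (1 - alpha) * old
        param.lerp_(old_state[name], 1 - alpha)
```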