Continual Learning
822 papers with code • 29 benchmarks • 30 datasets
Continual Learning (also known as Incremental Learning or Lifelong Learning) is the problem of learning a model over a large number of tasks presented sequentially, without forgetting knowledge obtained from preceding tasks, where data from old tasks is no longer available while training on new ones.
Unless otherwise noted, the benchmarks here use the Task-Incremental (Task-CL) setting, in which the task ID is provided at evaluation time (see the sketch below).
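The following is a minimal sketch of this Task-CL setup in PyTorch: a shared backbone with one output head per task, trained on tasks one after another with no access to earlier data. The loaders and layer sizes are hypothetical placeholders; a real benchmark (e.g., Split-MNIST) would supply the per-task data.

```python
import torch
import torch.nn as nn

class MultiHeadNet(nn.Module):
    """Shared backbone with one output head per task (Task-IL)."""
    def __init__(self, n_tasks: int, n_classes_per_task: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(256, n_classes_per_task) for _ in range(n_tasks)]
        )

    def forward(self, x, task_id: int):
        # The task ID selects the head -- this is what "task ID is
        # provided at evaluation time" means in the Task-CL setting.
        return self.heads[task_id](self.backbone(x))

def train_sequentially(model, task_loaders, epochs=1, lr=1e-3):
    """Naive sequential fine-tuning: data from earlier tasks is never
    revisited, so plain gradient descent typically suffers catastrophic
    forgetting -- the failure mode continual learning methods address."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for task_id, loader in enumerate(task_loaders):
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x, task_id), y)
                loss.backward()
                opt.step()
```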
Source:
Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
Three scenarios for continual learning
Lifelong Machine Learning
Continual lifelong learning with neural networks: A review
Libraries
Use these libraries to find Continual Learning models and implementations.
Datasets
Subtasks
Latest papers
Revisiting Neural Networks for Continual Learning: An Architectural Perspective
This paper seeks to bridge the gap between network architecture design and CL, presenting a holistic study of the impact of network architectures on CL.
QCore: Data-Efficient, On-Device Continual Calibration for Quantized Models -- Extended Version
The first difficulty in enabling continual calibration on the edge is that the full training data may be too large and thus not always available on edge devices.
Double Mixture: Towards Continual Event Detection from Speech
To address the challenges of catastrophic forgetting and effective disentanglement, we propose a novel method, 'Double Mixture.'
BACS: Background Aware Continual Semantic Segmentation
Besides the classical problem of catastrophic forgetting in the continual learning setting, continual semantic segmentation (CSS) suffers from the inherent ambiguity of the background, a phenomenon we refer to as "background shift": pixels labeled as background may correspond to future classes (forward background shift) or to previous classes (backward background shift).
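A toy NumPy illustration of background shift (a generic illustration, not the BACS method itself): at each learning step, pixels of any class outside the current label set are annotated as background, even though they belong to real classes at other steps.

```python
import numpy as np

# Hypothetical ground-truth mask containing classes 1, 2, and 3.
full_mask = np.array([[1, 1, 0],
                      [2, 2, 0],
                      [3, 3, 0]])

def mask_at_step(mask, known_classes):
    """Collapse everything outside the current label set to background (0)."""
    return np.where(np.isin(mask, known_classes), mask, 0)

# Step 1 knows only class 1: pixels of classes 2 and 3 appear as
# background but are future classes (forward background shift).
print(mask_at_step(full_mask, [1]))
# Step 2 knows only class 2: pixels of class 1, learned previously,
# are now labeled background (backward background shift).
print(mask_at_step(full_mask, [2]))
```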
Continual Learning on a Diet: Learning from Sparsely Labeled Streams Under Constrained Computation
DietCL carefully allocates the computational budget across both labeled and unlabeled data.
Continual Offline Reinforcement Learning via Diffusion-based Dual Generative Replay
Finally, by interleaving pseudo samples with real ones of the new task, we continually update the state and behavior generators to model progressively diverse behaviors, and regularize the multi-head critic via behavior cloning to mitigate forgetting.
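A hedged sketch of the interleaving idea: pseudo samples drawn from a generative model of past tasks are mixed into each batch of real new-task data, a generic generative-replay pattern rather than the paper's exact pipeline. The `generator.sample` API and the replay ratio are hypothetical stand-ins; the diffusion-based state and behavior generators themselves are elided.

```python
import torch

def replay_batches(new_task_loader, generator, replay_ratio=0.5):
    """Yield batches that interleave real new-task samples with pseudo
    samples replayed from a generative model of earlier tasks."""
    for x_real, y_real in new_task_loader:
        n_pseudo = int(replay_ratio * x_real.size(0))
        x_pseudo, y_pseudo = generator.sample(n_pseudo)  # assumed API
        yield (torch.cat([x_real, x_pseudo]),
               torch.cat([y_real, y_pseudo]))
```

Training on such mixed batches lets the learner rehearse past behavior without storing old data, which is what makes replay compatible with the continual setting described at the top of this page.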
E3: Ensemble of Expert Embedders for Adapting Synthetic Image Detectors to New Generators Using Limited Data
To address these issues, we introduce the Ensemble of Expert Embedders (E3), a novel continual learning framework for updating synthetic image detectors.
Scalable Language Model with Generalized Continual Learning
In this study, we introduce the Scalable Language Model (SLM) to overcome these limitations in a more challenging and generalized setting, a significant step toward practical continual learning.
Calibration of Continual Learning Models
Continual Learning (CL) focuses on maximizing the predictive performance of a model across a non-stationary stream of data.
F-MALLOC: Feed-forward Memory Allocation for Continual Learning in Neural Machine Translation
In the evolving landscape of Neural Machine Translation (NMT), the pretrain-then-finetune paradigm has yielded impressive results.