Continual Learning

822 papers with code • 29 benchmarks • 30 datasets

Continual Learning (also known as Incremental Learning or Lifelong Learning) refers to training a model on a large number of tasks sequentially without forgetting knowledge obtained from the preceding tasks, where data from old tasks is no longer available when training on new ones.
Unless otherwise noted, the benchmarks here follow the task-incremental (Task-CL) setting, where the task identity is provided at evaluation time.
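The Task-CL protocol above can be sketched in a few lines: tasks arrive one at a time, each task's data is discarded after training, and the task id selects the right head at evaluation. This is a minimal illustrative toy (a nearest-class-mean head per task), not any specific method from the papers below; all names are assumptions.

```python
# Toy sketch of the task-incremental (Task-CL) protocol: sequential tasks,
# no access to old data, task id given at evaluation time.
class TaskIncrementalModel:
    def __init__(self):
        self.heads = {}  # task_id -> {label: per-class feature mean}

    def train_task(self, task_id, data):
        """data: list of (feature_vector, label). Old tasks are never revisited."""
        sums, counts = {}, {}
        for x, y in data:
            s = sums.setdefault(y, [0.0] * len(x))
            for i, v in enumerate(x):
                s[i] += v
            counts[y] = counts.get(y, 0) + 1
        self.heads[task_id] = {y: [v / counts[y] for v in s] for y, s in sums.items()}

    def predict(self, task_id, x):
        """Task-CL: the task id is provided, so only that task's head is queried."""
        means = self.heads[task_id]
        dist = lambda m: sum((a - b) ** 2 for a, b in zip(x, m))
        return min(means, key=lambda y: dist(means[y]))

model = TaskIncrementalModel()
model.train_task(0, [([0.0, 0.0], "a"), ([1.0, 1.0], "b")])
model.train_task(1, [([0.0, 1.0], "c"), ([1.0, 0.0], "d")])  # task 0 data is gone
```

Because each task gets a disjoint head, this toy never forgets; the hard case, which most papers below address, is when tasks share parameters and later updates overwrite earlier knowledge (catastrophic forgetting).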

Source:
Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
Three scenarios for continual learning
Lifelong Machine Learning
Continual lifelong learning with neural networks: A review

Libraries

Use these libraries to find Continual Learning models and implementations
See all 8 libraries.

Revisiting Neural Networks for Continual Learning: An Architectural Perspective

byyx666/archcraft 23 Apr 2024

This paper seeks to bridge the gap between network architecture design and CL, presenting a holistic study of the impact of network architectures on CL.

QCore: Data-Efficient, On-Device Continual Calibration for Quantized Models -- Extended Version

decisionintelligence/qcore 22 Apr 2024

The first difficulty in enabling continual calibration on the edge is that the full training data may be too large and thus not always available on edge devices.

Double Mixture: Towards Continual Event Detection from Speech

jodie-kang/doublemixture 20 Apr 2024

To address the challenges of catastrophic forgetting and effective disentanglement, we propose a novel method, 'Double Mixture.'

BACS: Background Aware Continual Semantic Segmentation

mostafaelaraby/bacs-continual-semantic-segmentation 19 Apr 2024

Besides the common problem of classical catastrophic forgetting in the continual learning setting, CSS suffers from the inherent ambiguity of the background, a phenomenon we refer to as "background shift", since pixels labeled as background could correspond to future classes (forward background shift) or previous classes (backward background shift).

Continual Learning on a Diet: Learning from Sparsely Labeled Streams Under Constrained Computation

wx-zhang/continual-learning-on-a-diet 19 Apr 2024

DietCL meticulously allocates its computational budget across both labeled and unlabeled data.

Continual Offline Reinforcement Learning via Diffusion-based Dual Generative Replay

nju-rl/cugro 16 Apr 2024

Finally, by interleaving pseudo samples with real ones of the new task, we continually update the state and behavior generators to model progressively diverse behaviors, and regularize the multi-head critic via behavior cloning to mitigate forgetting.
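The replay step described here, interleaving generated pseudo samples with real samples of the new task, can be sketched as follows. The per-label Gaussian "generator" and the mixing helper are illustrative stand-ins (assumed names, not the diffusion-based generators used in the paper): the point is only that the learner sees a mixed batch rather than new-task data alone.

```python
# Toy sketch of generative replay: a generator fit on an old task replays
# pseudo samples, which are interleaved with real new-task samples.
import random

class GaussianGenerator:
    """Fits per-label 1-D Gaussians; replays pseudo samples instead of raw data."""
    def __init__(self):
        self.params = {}  # label -> (mean, std)

    def fit(self, data):
        by_label = {}
        for x, y in data:
            by_label.setdefault(y, []).append(x)
        for y, xs in by_label.items():
            m = sum(xs) / len(xs)
            var = sum((x - m) ** 2 for x in xs) / len(xs)
            self.params[y] = (m, var ** 0.5)

    def sample(self, n, rng):
        out = []
        for _ in range(n):
            y = rng.choice(list(self.params))
            m, s = self.params[y]
            out.append((rng.gauss(m, s), y))
        return out

def interleave(real, pseudo, rng):
    """Mix real new-task samples with generated pseudo samples of old tasks."""
    batch = real + pseudo
    rng.shuffle(batch)
    return batch

rng = random.Random(0)
gen = GaussianGenerator()
gen.fit([(0.1, "old"), (-0.1, "old")])  # old-task data, discarded after fitting
mixed = interleave([(5.0, "new"), (5.2, "new")], gen.sample(2, rng), rng)
```

Training on `mixed` batches is what lets the learner keep rehearsing old behaviors without storing old data, which is the core trade-off generative replay makes.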

E3: Ensemble of Expert Embedders for Adapting Synthetic Image Detectors to New Generators Using Limited Data

arefaz/e3-ensemble-of-expert-embedders-cvprwmf24 12 Apr 2024

To address these issues, we introduce the Ensemble of Expert Embedders (E3), a novel continual learning framework for updating synthetic image detectors.

Scalable Language Model with Generalized Continual Learning

faceonlive/ai-research 11 Apr 2024

In this study, we introduce the Scalable Language Model (SLM) to overcome these limitations within a more challenging and generalized setting, representing a significant advancement toward practical applications for continual learning.

Calibration of Continual Learning Models

faceonlive/ai-research 11 Apr 2024

Continual Learning (CL) focuses on maximizing the predictive performance of a model across a non-stationary stream of data.

F-MALLOC: Feed-forward Memory Allocation for Continual Learning in Neural Machine Translation

wjmacro/continualmt 7 Apr 2024

In the evolving landscape of Neural Machine Translation (NMT), the pretrain-then-finetune paradigm has yielded impressive results.
