Class Incremental Learning

208 papers with code • 6 benchmarks • 1 dataset

Incremental learning over a sequence of tasks that introduce new classes, where the task ID is not available at test time: the model must discriminate among all classes seen so far.

Most implemented papers

Large Scale Incremental Learning

ContinualAI/avalanche • CVPR 2019

We believe this is because of the combination of two factors: (a) the data imbalance between the old and new classes, and (b) the increasing number of visually similar classes.
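
The paper's remedy (BiC) attaches a small bias-correction layer that rescales only the new-class logits, trained on a small balanced validation set while the rest of the model stays frozen. A minimal PyTorch sketch of that correction step (class and variable names are illustrative, not the repo's API):

```python
import torch
import torch.nn as nn

class BiasCorrection(nn.Module):
    """Two-parameter linear correction applied only to new-class logits
    (BiC-style sketch; names are illustrative)."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, logits: torch.Tensor, num_old: int) -> torch.Tensor:
        old = logits[:, :num_old]                            # old-class logits untouched
        new = self.alpha * logits[:, num_old:] + self.beta   # rescale new-class logits
        return torch.cat([old, new], dim=1)

# Usage: fit alpha/beta on a small balanced validation set, model frozen.
logits = torch.randn(8, 100)      # e.g. 50 old + 50 new classes
corrected = BiasCorrection()(logits, num_old=50)
```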

Avalanche: an End-to-End Library for Continual Learning

ContinualAI/avalanche • 1 Apr 2021

Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning.

Learning to Prompt for Continual Learning

google-research/l2p • CVPR 2022

The mainstream paradigm behind continual learning has been to adapt the model parameters to non-stationary data distributions, where catastrophic forgetting is the central challenge.
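
Concretely, L2P keeps a small pool of learnable prompts with matching keys; a frozen pretrained encoder produces a query feature, the closest keys select prompts, and the selected prompts are prepended to the input tokens. A rough sketch of that selection step under assumed shapes (not the google-research/l2p API):

```python
import torch
import torch.nn.functional as F

# Illustrative pool sizes; the real hyperparameters live in the repo config.
pool_size, prompt_len, dim, top_k = 10, 5, 768, 4
prompt_keys = torch.randn(pool_size, dim, requires_grad=True)
prompt_pool = torch.randn(pool_size, prompt_len, dim, requires_grad=True)

def select_prompts(query: torch.Tensor):
    """query: [B, dim] feature from a frozen pretrained encoder."""
    sim = F.cosine_similarity(query.unsqueeze(1), prompt_keys.unsqueeze(0), dim=-1)  # [B, pool]
    idx = sim.topk(top_k, dim=1).indices              # pick the closest keys
    prompts = prompt_pool[idx].flatten(1, 2)          # [B, top_k * prompt_len, dim]
    match_loss = (1 - sim.gather(1, idx)).mean()      # pulls selected keys toward the query
    return prompts, match_loss

query = torch.randn(2, dim)
prompts, match_loss = select_prompts(query)  # prepend `prompts` to the token sequence
```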

Dark Experience for General Continual Learning: a Strong, Simple Baseline

aimagelab/mammoth • NeurIPS 2020

Continual Learning has inspired a plethora of approaches and evaluation settings; however, the majority of them overlook the properties of a practical scenario, where the data stream cannot be shaped as a sequence of tasks and offline training is not viable.
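
The proposed baseline, Dark Experience Replay, stores the network's logits alongside buffered examples and penalizes the model for drifting away from them. A minimal sketch of the objective, assuming a generic `model` and a pre-sampled buffer batch:

```python
import torch
import torch.nn.functional as F

def der_loss(model, x, y, buf_x, buf_logits, alpha=0.5):
    """Dark Experience Replay sketch: match the network's current responses
    on buffered examples to the logits stored when they were first seen."""
    ce = F.cross_entropy(model(x), y)            # standard loss on the stream
    mse = F.mse_loss(model(buf_x), buf_logits)   # distill the past responses
    return ce + alpha * mse
```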

Supervised Contrastive Replay: Revisiting the Nearest Class Mean Classifier in Online Class-Incremental Continual Learning

RaptorMai/online-continual-learning • 22 Mar 2021

Online class-incremental continual learning (CL) studies the problem of learning new classes continually from an online non-stationary data stream, where the learner must adapt to new data while mitigating catastrophic forgetting.
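
The revisited classifier is the nearest class mean (NCM): each class is represented by the mean of its buffered exemplar features, and a test sample takes the label of the closest mean. A small sketch, assuming features already come from the learned encoder:

```python
import torch
import torch.nn.functional as F

def ncm_predict(features: torch.Tensor, class_means: torch.Tensor) -> torch.Tensor:
    """Nearest class mean: assign each feature to the closest stored mean.
    features: [B, D]; class_means: [C, D] (one row per class seen so far)."""
    f = F.normalize(features, dim=1)        # L2-normalize, as is common for NCM
    m = F.normalize(class_means, dim=1)
    return torch.cdist(f, m).argmin(dim=1)  # index of the nearest mean = class

# Means are computed from buffered exemplars of each class, e.g.
# class_means[c] = encoder(exemplars_of_class_c).mean(dim=0)
```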

New Insights on Reducing Abrupt Representation Change in Online Continual Learning

aimagelab/mammoth • ICLR 2022

In this work, we focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream, and new classes must be distinguished from previous ones.
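
The resulting method (ER-ACE) updates incoming and replayed samples asymmetrically, so new classes do not abruptly overwrite old representations. A simplified sketch, assuming incoming batches contain only new-task classes (the paper's masking is slightly more fine-grained):

```python
import torch
import torch.nn.functional as F

def er_ace_loss(logits_in, y_in, logits_buf, y_buf, seen_old: torch.Tensor):
    """ER-ACE-style asymmetric update (sketch): incoming samples do not push
    down the logits of old classes; replayed samples use the full loss.
    `seen_old` holds the indices of classes from previous tasks, e.g.
    torch.arange(num_old_classes)."""
    masked = logits_in.clone()
    masked[:, seen_old] = float('-inf')   # drop old classes from the softmax
    loss_in = F.cross_entropy(masked, y_in)
    loss_buf = F.cross_entropy(logits_buf, y_buf)
    return loss_in + loss_buf
```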

A Multi-Head Model for Continual Learning via Out-of-Distribution Replay

k-gyuhak/more • 20 Aug 2022

Instead of using the saved samples in memory to update the network for previous tasks/classes, as existing approaches do, MORE leverages the saved samples to build a task-specific classifier (adding a new classification head) without updating the network learned for previous tasks/classes.
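
A rough sketch of the resulting structure: a frozen shared backbone with one head per task, where each head also learns (using replayed samples as out-of-distribution negatives) to score out-of-task inputs low, so the most confident head wins at test time. Layer sizes, the confidence rule, and names here are illustrative, not the repo's API:

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad = False          # the previously learned network is not updated

heads = nn.ModuleList()              # grows by one classification head per task

def add_task_head(num_classes: int):
    """Train only this head on the new task, with buffered samples from
    other tasks serving as OOD negatives."""
    heads.append(nn.Linear(256, num_classes))

@torch.no_grad()
def predict(x: torch.Tensor) -> torch.Tensor:
    feats = backbone(x)
    # Each head scores out-of-task inputs low, so the global argmax over the
    # concatenated per-head probabilities picks both the task and the class.
    probs = torch.cat([h(feats).softmax(dim=1) for h in heads], dim=1)
    return probs.argmax(dim=1)
```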

RMM: Reinforced Memory Management for Class-Incremental Learning

yaoyaoliu/rmm • NeurIPS 2021

Class-Incremental Learning (CIL) [40] trains classifiers under a strict memory budget: in each incremental phase, learning is done for new data, most of which is abandoned to free space for the next phase.
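
RMM's contribution is to learn, via reinforcement learning, how to split that fixed budget between old exemplars and new data at each phase. The sketch below shows only the budget split itself, with the ratio as a plain argument rather than a learned policy (names illustrative):

```python
import random

def allocate_memory(old_exemplars, new_data, budget: int, ratio: float):
    """Split a fixed memory budget between old-class exemplars and new data.
    In RMM the per-phase `ratio` is chosen by a learned RL policy; here it
    is just a parameter (sketch)."""
    n_old = min(int(budget * ratio), len(old_exemplars))
    keep_old = random.sample(old_exemplars, n_old)
    keep_new = random.sample(new_data, min(budget - n_old, len(new_data)))
    return keep_old + keep_new
```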

Efficient Lifelong Learning with A-GEM

facebookresearch/agem • ICLR 2019

In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task.
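
A-GEM enforces this by checking the gradient on the current batch against an average gradient computed on episodic memory, and projecting it when the two conflict. A sketch of that projection on flattened gradients:

```python
import torch

def agem_project(grad: torch.Tensor, grad_ref: torch.Tensor) -> torch.Tensor:
    """A-GEM gradient correction: if the proposed update would increase the
    average loss on episodic memory (negative dot product with the reference
    gradient), project it onto the reference direction."""
    dot = torch.dot(grad, grad_ref)
    if dot < 0:
        grad = grad - (dot / torch.dot(grad_ref, grad_ref)) * grad_ref
    return grad

# `grad` is the flattened gradient on the current batch; `grad_ref` is the
# flattened gradient on a batch sampled from episodic memory.
```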

Class-incremental Learning via Deep Model Consolidation

mmasana/FACIL • 19 Mar 2019

The idea is to first train a separate model only for the new classes, and then combine the two individual models, trained on two distinct sets of classes (old and new), via a novel double distillation training objective.
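
A sketch of that double-distillation objective: on auxiliary data, the consolidated model's zero-mean-normalized logits are regressed onto the concatenated logits of the old and new models. Function and tensor names are illustrative:

```python
import torch
import torch.nn.functional as F

def dmc_loss(student_logits, old_logits, new_logits):
    """Double-distillation sketch in the spirit of DMC: the consolidated model
    matches the old model on old classes and the new model on new classes.
    old_logits: [B, n_old]; new_logits: [B, n_new]; student_logits: [B, n_old + n_new]."""
    target = torch.cat([old_logits, new_logits], dim=1)
    target = target - target.mean(dim=1, keepdim=True)               # zero-mean targets
    student = student_logits - student_logits.mean(dim=1, keepdim=True)
    return F.mse_loss(student, target)
```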