Class Incremental Learning

143 papers with code • 1 benchmark • 1 dataset

Incremental learning of a sequence of tasks when the task-ID is not available at test time.




Most implemented papers

Overcoming catastrophic forgetting in neural networks

ContinualAI/avalanche 2 Dec 2016

The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence.
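The method this paper introduces, Elastic Weight Consolidation (EWC), slows learning on weights that were important for earlier tasks by adding a quadratic penalty anchored at the old parameters. A minimal NumPy sketch with made-up values; in practice `fisher` is a diagonal Fisher-information estimate computed from gradients on the old task:

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta_old_i)^2.

    theta     -- current parameters (flat vector)
    theta_old -- parameters after training the previous task
    fisher    -- diagonal Fisher-information estimate (importance weights)
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)

# Toy illustration: moving an "important" parameter costs much more.
theta_old = np.array([1.0, -2.0])
fisher = np.array([10.0, 0.1])   # first parameter matters for the old task
print(ewc_penalty(np.array([1.5, -2.0]), theta_old, fisher))  # 1.25
print(ewc_penalty(np.array([1.0, -1.5]), theta_old, fisher))  # 0.0125
```

This penalty is simply added to the new task's loss, so gradient descent trades off new-task accuracy against staying close to parameters the old task relied on.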

Supervised Contrastive Learning

google-research/google-research NeurIPS 2020

Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state-of-the-art performance in the unsupervised training of deep image models.

iCaRL: Incremental Classifier and Representation Learning

srebuffi/iCaRL CVPR 2017

A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data.
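At test time, iCaRL classifies with a nearest-mean-of-exemplars rule: each class is represented by the L2-normalized mean of its stored exemplars, and a sample gets the class with the closest mean. A rough NumPy sketch with toy 2-D features (real features come from the learned representation):

```python
import numpy as np

def nme_classify(features, exemplar_sets):
    """Nearest-mean-of-exemplars: predict, for each feature vector, the class
    whose normalized exemplar mean is closest in Euclidean distance."""
    means = []
    for ex in exemplar_sets:                 # one (n_i, dim) array per class
        m = ex.mean(axis=0)
        means.append(m / np.linalg.norm(m))  # iCaRL L2-normalizes the means
    means = np.stack(means)                  # (num_classes, dim)
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    dists = np.linalg.norm(feats[:, None, :] - means[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Toy data: class 0 exemplars cluster near (1, 0), class 1 near (0, 1).
exemplars = [np.array([[1.0, 0.1], [0.9, 0.0]]),
             np.array([[0.1, 1.0], [0.0, 0.9]])]
print(nme_classify(np.array([[0.8, 0.2], [0.1, 0.7]]), exemplars))  # [0 1]
```

Because the class means are recomputed from exemplars under the current representation, this classifier adapts as the feature extractor changes across increments.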

Learning without Forgetting

ContinualAI/avalanche 29 Jun 2016

We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities.
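LwF records the old network's outputs on the new task's data before training, then adds a distillation term that keeps the current network's outputs close to those recordings while the new task is learned. A sketch of a temperature-scaled distillation loss of this kind (values illustrative):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def lwf_distillation(old_logits, new_logits, T=2.0):
    """Cross-entropy between the old model's softened outputs (fixed targets)
    and the current model's softened outputs on the same inputs."""
    p_old = softmax(old_logits, T)   # recorded before training the new task
    p_new = softmax(new_logits, T)
    return -np.mean(np.sum(p_old * np.log(p_new + 1e-12), axis=-1))

# Identical outputs minimize the loss; any drift on old outputs increases it.
logits = np.array([[2.0, 0.5, -1.0]])
drifted = logits + np.array([0.0, 1.0, 0.0])
print(lwf_distillation(logits, logits) < lwf_distillation(logits, drifted))  # True
```

Since the targets are computed on the *new* task's inputs, no old-task data needs to be stored, which is the point of the method.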

Three scenarios for continual learning

GMvandeVen/continual-learning 15 Apr 2019

Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult.
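The scenarios this paper defines differ in what is asked at test time: in task-incremental learning the task-ID is given, so only that task's classes compete; in class-incremental learning (the setting of this page) no task-ID is given and all classes seen so far compete, which is much harder. A toy sketch of the difference:

```python
import numpy as np

def predict(logits, scenario, task_id=None, classes_per_task=2):
    """Test-time prediction under two continual-learning scenarios.

    "task"  -- task-IL: restrict the argmax to the given task's classes.
    "class" -- class-IL: argmax over every class seen so far (no task-ID).
    """
    if scenario == "task":
        lo = task_id * classes_per_task
        return lo + int(np.argmax(logits[lo:lo + classes_per_task]))
    return int(np.argmax(logits))

logits = np.array([2.0, 0.1, 3.0, 0.5])    # two tasks, two classes each
print(predict(logits, "task", task_id=0))  # 0  (only task 0's classes compete)
print(predict(logits, "class"))            # 2  (all four classes compete)
```

The same network can look successful under the task-IL rule and fail badly under the class-IL rule, which is why the paper argues methods must state which scenario they evaluate.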

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

pokaxpoka/deep_Mahalanobis_detector NeurIPS 2018

Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.
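The detector scores a sample by its Mahalanobis distance to the closest class-conditional Gaussian fitted to the network's features, with a covariance shared across classes. A minimal sketch with toy 2-D features standing in for learned representations:

```python
import numpy as np

def mahalanobis_score(x, class_means, cov):
    """Confidence score: negative Mahalanobis distance to the nearest
    class-conditional Gaussian (tied covariance). Higher = more in-distribution."""
    prec = np.linalg.inv(cov)
    dists = [(x - mu) @ prec @ (x - mu) for mu in class_means]
    return -min(dists)

means = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]  # per-class feature means
cov = np.eye(2)                                        # shared covariance
in_dist = mahalanobis_score(np.array([0.2, -0.1]), means, cov)
ood = mahalanobis_score(np.array([10.0, -10.0]), means, cov)
print(in_dist > ood)  # True: the in-distribution sample scores higher
```

Thresholding this score gives a simple detector for both out-of-distribution samples and adversarial inputs, since both tend to land far from every class mean in feature space.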

On Tiny Episodic Memories in Continual Learning

facebookresearch/agem 27 Feb 2019

For successful knowledge transfer, however, the learner needs to remember how to perform previous tasks.
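With only a tiny episodic memory, a standard way to decide what to keep from the stream is reservoir sampling, which maintains a uniform subsample in fixed space; the stored examples are then replayed alongside new data. A sketch (the helper name is mine):

```python
import random

def reservoir_update(memory, capacity, example, n_seen):
    """Keep a uniform random sample of the stream in a fixed-size memory.

    n_seen -- number of stream examples observed so far, including this one.
    """
    if len(memory) < capacity:
        memory.append(example)
    else:
        j = random.randrange(n_seen)   # replace with probability capacity/n_seen
        if j < capacity:
            memory[j] = example

random.seed(0)
mem = []
for t, x in enumerate(range(1000), start=1):
    reservoir_update(mem, 10, x, t)
print(len(mem))  # 10 -- the memory never exceeds its budget
```

The paper's finding is that even replaying a memory this small, one batch of stored examples per update, substantially reduces forgetting.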

A Multi-Head Model for Continual Learning via Out-of-Distribution Replay

k-gyuhak/more 20 Aug 2022

Instead of using the saved samples in memory to update the network for previous tasks/classes, as existing approaches do, MORE leverages the saved samples to build a task-specific classifier (adding a new classification head) without updating the network learned for previous tasks/classes.
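A hedged sketch of the multi-head idea: each task gets its own classification head over a shared feature extractor, and with no task-ID at test time the prediction comes from the most confident head. (Picking the head by plain softmax confidence is a simple stand-in here for the paper's OOD-based task inference.)

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict_with_heads(feature, heads, classes_per_head):
    """Each head scores only its own task's classes; the global prediction is
    the class with the highest confidence across all heads."""
    best_class, best_conf = None, -1.0
    for t, W in enumerate(heads):            # W: (dim, classes_per_head)
        probs = softmax(feature @ W)
        c = int(np.argmax(probs))
        if probs[c] > best_conf:
            best_conf = float(probs[c])
            best_class = t * classes_per_head + c
    return best_class

# Toy setup: 2-D features, two tasks, two classes per task.
heads = [np.array([[2.0, 0.0], [0.0, 1.0]]),
         np.array([[0.0, 0.0], [0.0, 3.0]])]
print(predict_with_heads(np.array([1.0, 0.0]), heads, 2))  # 0
print(predict_with_heads(np.array([0.0, 1.0]), heads, 2))  # 3
```

The appeal is that old heads and the shared network are frozen when a new head is trained, so earlier tasks cannot be overwritten; the hard part the paper addresses is making each head reject inputs from other tasks.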

RMM: Reinforced Memory Management for Class-Incremental Learning

yaoyaoliu/rmm NeurIPS 2021

Class-Incremental Learning (CIL) trains classifiers under a strict memory budget: in each incremental phase, learning is done for new data, most of which is abandoned to free space for the next phase.

Efficient Lifelong Learning with A-GEM

facebookresearch/agem ICLR 2019

In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task.
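A-GEM makes this efficient with a single averaged gradient constraint: if the current gradient `g` conflicts with the gradient `g_ref` computed on a batch from episodic memory (negative dot product), `g` is projected so the average memory loss does not increase. Sketch:

```python
import numpy as np

def agem_project(g, g_ref):
    """A-GEM projection: if g . g_ref < 0, return
    g - (g . g_ref / g_ref . g_ref) * g_ref; otherwise keep g unchanged."""
    dot = g @ g_ref
    if dot >= 0:
        return g                     # no interference with memory, keep g
    return g - (dot / (g_ref @ g_ref)) * g_ref

g = np.array([1.0, -1.0])            # current-task gradient
g_ref = np.array([0.0, 1.0])         # gradient on the memory batch
g_t = agem_project(g, g_ref)
print(g_t, g_t @ g_ref)              # projected gradient is orthogonal to g_ref
```

Compared with GEM, which solves a quadratic program against one constraint per past task, this single-constraint projection is a closed-form, one-line update, which is where the "efficient" in the title comes from.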