Incremental Learning

310 papers with code • 21 benchmarks • 9 datasets

Incremental learning aims to develop artificially intelligent systems that can continuously learn to address new tasks from new data while preserving the knowledge acquired on previous tasks.



Most implemented papers

Overcoming catastrophic forgetting in neural networks

ContinualAI/avalanche 2 Dec 2016

The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence.
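This paper introduces Elastic Weight Consolidation (EWC), which anchors parameters that were important for old tasks with a quadratic penalty weighted by an estimate of the Fisher information. A minimal sketch of that penalty term (the variable names and toy values are illustrative, not from the paper):

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=1.0):
    """Quadratic EWC penalty: (lam/2) * sum_i F_i * (theta_i - theta*_i)^2.

    `fisher` approximates the diagonal Fisher information of the old task,
    so parameters that mattered for it are anchored more strongly.
    """
    return 0.5 * lam * float(np.sum(fisher * (params - old_params) ** 2))

theta_star = np.array([1.0, -2.0])   # parameters after the old task
fisher = np.array([10.0, 0.1])       # per-parameter importance estimates
theta = np.array([1.5, -1.5])        # parameters while learning the new task

# Both parameters drifted by 0.5, but the high-Fisher one dominates
# the penalty: 0.5 * (10*0.25 + 0.1*0.25) = 1.2625.
penalty = ewc_penalty(theta, theta_star, fisher)
```

In training, this penalty is added to the new task's loss, trading plasticity for stability via `lam`.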

iCaRL: Incremental Classifier and Representation Learning

srebuffi/iCaRL CVPR 2017

A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data.
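At test time, iCaRL classifies with a nearest-mean-of-exemplars rule over stored exemplar features rather than the network's output head. A simplified sketch of that rule (the toy 2-D features and exemplar sets are invented for illustration):

```python
import numpy as np

def nearest_mean_classify(feature, exemplar_sets):
    """iCaRL-style nearest-mean-of-exemplars: predict the class whose
    mean exemplar feature (L2-normalised) is closest to the input feature."""
    f = feature / np.linalg.norm(feature)
    best, best_dist = None, np.inf
    for label, feats in exemplar_sets.items():
        mu = np.mean(feats, axis=0)
        mu = mu / np.linalg.norm(mu)
        d = float(np.linalg.norm(f - mu))
        if d < best_dist:
            best, best_dist = label, d
    return best

# Toy exemplar memory: a few stored feature vectors per class.
exemplars = {
    0: np.array([[1.0, 0.0], [0.9, 0.1]]),
    1: np.array([[0.0, 1.0], [0.1, 0.9]]),
}
pred = nearest_mean_classify(np.array([0.8, 0.2]), exemplars)
```

Because the class means are recomputed from the exemplar memory, the classifier adapts automatically as the feature representation is incrementally updated.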

Learning without Forgetting

ContinualAI/avalanche 29 Jun 2016

We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities.
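LwF preserves old-task behaviour by distilling the frozen original model's (softened) outputs on the new task's images into the updated network, so no old data is needed. A minimal numpy sketch of that distillation loss (logit values and the temperature choice are illustrative):

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()           # numerical stability
    e = np.exp(z)
    return e / e.sum()

def lwf_distillation_loss(old_logits, new_logits, T=2.0):
    """Cross-entropy between the frozen old model's softened outputs
    (recorded on the new task's images) and the current model's outputs
    on the old task's head -- computed without any old-task data."""
    target = softmax(old_logits, T)
    pred = softmax(new_logits, T)
    return float(-np.sum(target * np.log(pred + 1e-12)))

# If the current model still matches the old one, the loss is just the
# entropy of the softened target; drifting away from it costs more.
loss_same = lwf_distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
loss_drift = lwf_distillation_loss([2.0, 0.5, -1.0], [-1.0, 0.5, 2.0])
```

The total training objective combines this term with the standard cross-entropy on the new task's labels.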

Three scenarios for continual learning

GMvandeVen/continual-learning 15 Apr 2019

Standard artificial neural networks suffer from the well-known problem of catastrophic forgetting, which makes continual or lifelong learning difficult.

Gradient Episodic Memory for Continual Learning

facebookresearch/GradientEpisodicMemory NeurIPS 2017

One major obstacle towards AI is the poor ability of models to solve new problems quickly without forgetting previously acquired knowledge.
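GEM constrains each update so the loss on an episodic memory of past tasks does not increase, solving a small quadratic program over per-task memory gradients. The simpler averaged variant (A-GEM, a follow-up by the same group) reduces this to a single projection, which is easier to sketch; the toy gradients below are invented for illustration:

```python
import numpy as np

def agem_project(grad, mem_grad):
    """A-GEM-style projection: if the proposed update conflicts with the
    gradient on remembered old-task examples (negative dot product),
    project it onto the non-conflicting half-space; otherwise keep it."""
    dot = float(np.dot(grad, mem_grad))
    if dot >= 0:
        return grad
    return grad - (dot / float(np.dot(mem_grad, mem_grad))) * mem_grad

g = np.array([1.0, -1.0])        # gradient on the current task's batch
g_mem = np.array([0.0, 1.0])     # gradient on episodic-memory examples
g_proj = agem_project(g, g_mem)  # conflict (dot = -1), so project
```

After projection the update is orthogonal to the memory gradient, so to first order it no longer increases the loss on the remembered examples.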

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

pokaxpoka/deep_Mahalanobis_detector NeurIPS 2018

Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.
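The paper's detector scores a test sample by its Mahalanobis distance to class-conditional Gaussians fitted on intermediate features (with a covariance shared across classes). A simplified sketch of that confidence score (the 2-D features, means, and identity covariance are toy values):

```python
import numpy as np

def mahalanobis_score(x, class_means, shared_cov):
    """Confidence score: negative Mahalanobis distance to the closest
    class-conditional Gaussian under a tied (shared) covariance."""
    prec = np.linalg.inv(shared_cov)
    dists = [float((x - mu) @ prec @ (x - mu)) for mu in class_means]
    return -min(dists)

means = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
cov = np.eye(2)
in_dist = mahalanobis_score(np.array([0.1, -0.1]), means, cov)
ood = mahalanobis_score(np.array([10.0, -10.0]), means, cov)
# The far-away sample receives a much lower confidence score, so a
# simple threshold on the score flags it as out-of-distribution.
```

The full method additionally ensembles scores across network layers and applies small input perturbations, which are omitted here.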

End-to-End Incremental Learning

mmasana/FACIL ECCV 2018

Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally.

Large Scale Incremental Learning

ContinualAI/avalanche CVPR 2019

We attribute the performance drop of incremental learning at large scale to the combination of two factors: (a) the data imbalance between the old and new classes, and (b) the increasing number of visually similar classes.
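To counter the imbalance, the paper (BiC) appends a two-parameter linear bias-correction layer that rescales only the new classes' logits, fit on a small balanced validation set. A minimal sketch of the correction step (the logits, mask, and alpha/beta values are illustrative, not fitted):

```python
import numpy as np

def bias_correct(logits, new_class_mask, alpha, beta):
    """Two-parameter linear correction applied only to new-class logits;
    in the paper alpha and beta are learned on a held-out balanced set."""
    out = np.array(logits, dtype=float)
    out[new_class_mask] = alpha * out[new_class_mask] + beta
    return out

logits = np.array([1.0, 1.2, 3.0, 2.8])          # last two are new classes
mask = np.array([False, False, True, True])
corrected = bias_correct(logits, mask, alpha=0.3, beta=0.1)
# The raw argmax favoured an (over-confident) new class; after the
# correction, an old class wins.
```

Only these two scalars are trained in the correction stage, so the layer cannot overfit the small balanced set the way retraining the full classifier would.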

Incremental Learning of Object Detectors without Catastrophic Forgetting

kshmelkov/incremental_detectors ICCV 2017

Despite their success for object detection, convolutional neural networks are ill-equipped for incremental learning, i.e., adapting the original model trained on a set of classes to additionally detect objects of new classes in the absence of the initial training data.

Visual Memorability for Robotic Interestingness via Unsupervised Online Learning

wang-chen/interestingness ECCV 2020

In this paper, we explore the problem of interesting scene prediction for mobile robots.