Incremental Learning
310 papers with code • 21 benchmarks • 9 datasets
Incremental learning aims to develop artificially intelligent systems that can continuously learn to address new tasks from new data while preserving the knowledge learned from previous tasks.
Most implemented papers
Overcoming catastrophic forgetting in neural networks
The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence.
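The blurb states the motivation rather than the mechanism; the paper's elastic weight consolidation (EWC) adds a quadratic penalty that anchors parameters important to earlier tasks, weighted by a diagonal Fisher-information estimate. A minimal sketch assuming PyTorch; `old_params`, `fisher_diag`, and `ewc_lambda` are illustrative names for quantities computed on the previous task.

```python
import torch

def ewc_penalty(model, old_params, fisher_diag, ewc_lambda=100.0):
    """Quadratic penalty that discourages changes to parameters that were
    important (high estimated Fisher information) for previous tasks."""
    penalty = torch.zeros(())
    for name, param in model.named_parameters():
        if name in fisher_diag:
            penalty = penalty + (fisher_diag[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * ewc_lambda * penalty

# During training on a new task (illustrative):
# loss = task_loss + ewc_penalty(model, old_params, fisher_diag)
```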
iCaRL: Incremental Classifier and Representation Learning
A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data.
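iCaRL combines exemplar rehearsal, distillation, and nearest-mean-of-exemplars classification. Below is a minimal sketch of the last component only, assuming PyTorch; `exemplar_means` is a hypothetical (C, D) tensor of per-class feature means computed from the stored exemplars.

```python
import torch
import torch.nn.functional as F

def nearest_mean_classify(features, exemplar_means):
    """iCaRL-style nearest-mean-of-exemplars classification: assign each
    L2-normalised feature vector to the class whose exemplar mean is closest."""
    feats = F.normalize(features, dim=1)        # (N, D) test features
    means = F.normalize(exemplar_means, dim=1)  # (C, D) per-class exemplar means
    return torch.cdist(feats, means).argmin(dim=1)
```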
Learning without Forgetting
We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities.
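The "original capabilities" are preserved by distilling the old model's predictions on the new-task inputs while the new head is trained with a standard classification loss. A minimal sketch of such an objective, assuming PyTorch; the argument names and default weights are illustrative, not the paper's exact settings.

```python
import torch.nn.functional as F

def lwf_loss(new_head_logits, targets, old_head_logits, recorded_old_logits,
             temperature=2.0, distill_weight=1.0):
    """Learning-without-Forgetting-style objective: cross-entropy on the new
    task plus a distillation term that keeps the (shared-backbone) old-task
    outputs close to what the original model predicted on the new-task data."""
    ce = F.cross_entropy(new_head_logits, targets)
    teacher = F.softmax(recorded_old_logits / temperature, dim=1)
    student = F.log_softmax(old_head_logits / temperature, dim=1)
    distill = F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2
    return ce + distill_weight * distill
```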
Three scenarios for continual learning
Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, which makes continual or lifelong learning difficult.
Gradient Episodic Memory for Continual Learning
One major obstacle towards AI is the poor ability of models to solve new problems more quickly and without forgetting previously acquired knowledge.
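GEM keeps an episodic memory of past-task examples and constrains each update so that the loss on that memory does not increase. A rough single-constraint sketch of the projection step, assuming PyTorch and flattened gradient vectors; the full method solves a small quadratic program with one constraint per past task.

```python
import torch

def project_gradient(grad, mem_grad):
    """Single-constraint gradient projection in the spirit of GEM: if the
    proposed update would increase the loss on the episodic memory of past
    tasks (negative inner product), project it onto the closest gradient
    that does not violate the constraint."""
    dot = torch.dot(grad, mem_grad)
    if dot < 0:
        grad = grad - (dot / torch.dot(mem_grad, mem_grad)) * mem_grad
    return grad
```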
A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.
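The detector proposed in this paper scores samples by their Mahalanobis distance to class-conditional Gaussians fitted on intermediate features with a shared covariance. A minimal sketch of that scoring rule, assuming PyTorch; `class_means` and `precision` (the inverse of the shared covariance) are illustrative names for statistics estimated on the training set.

```python
import torch

def mahalanobis_score(feats, class_means, precision):
    """Negated minimum squared Mahalanobis distance from each feature vector
    to the class-conditional Gaussians (shared covariance, passed as its
    inverse). Higher scores indicate more in-distribution samples."""
    scores = []
    for mu in class_means:          # iterate over per-class feature means
        diff = feats - mu           # (N, D)
        scores.append(-torch.einsum("nd,de,ne->n", diff, precision, diff))
    return torch.stack(scores, dim=1).max(dim=1).values
```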
End-to-End Incremental Learning
Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally.
Large Scale Incremental Learning
We believe this degradation is caused by the combination of two factors: (a) the data imbalance between the old and new classes, and (b) the increasing number of visually similar classes.
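To counter the imbalance, the paper (BiC) appends a small bias-correction step that rescales and shifts only the new-class logits, fitted on a class-balanced held-out set. A minimal sketch assuming PyTorch; the class layout and training procedure here are simplified.

```python
import torch
import torch.nn as nn

class BiasCorrection(nn.Module):
    """Two-parameter bias-correction layer in the spirit of BiC: rescale and
    shift only the logits of the newly added classes, trained on a small,
    class-balanced held-out set while the rest of the network stays frozen."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, logits, num_new):
        old, new = logits[:, :-num_new], logits[:, -num_new:]
        return torch.cat([old, self.alpha * new + self.beta], dim=1)
```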
Incremental Learning of Object Detectors without Catastrophic Forgetting
Despite their success for object detection, convolutional neural networks are ill-equipped for incremental learning, i.e., adapting the original model trained on a set of classes to additionally detect objects of new classes in the absence of the initial training data.
Visual Memorability for Robotic Interestingness via Unsupervised Online Learning
In this paper, we explore the problem of interesting scene prediction for mobile robots.