Incremental Learning
383 papers with code • 22 benchmarks • 9 datasets
Incremental learning aims to develop artificially intelligent systems that can continuously learn to address new tasks from new data while preserving knowledge learned from previous tasks.
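As a toy illustration of this setting, a nearest-class-mean classifier can absorb new classes phase by phase without revisiting earlier data; all features and labels below are made up:

```python
# Toy class-incremental learning with a nearest-class-mean classifier:
# class means are updated as new classes arrive, and earlier phases'
# raw data is never needed again. Data here is purely illustrative.

class NearestClassMean:
    def __init__(self):
        self.means = {}  # class label -> feature mean

    def learn_classes(self, batch):
        # batch: {label: [feature vectors]} for the *new* classes only
        for label, vecs in batch.items():
            n, dim = len(vecs), len(vecs[0])
            self.means[label] = [sum(v[d] for v in vecs) / n for d in range(dim)]

    def predict(self, x):
        # Assign x to the class with the closest stored mean.
        def dist(m):
            return sum((a - b) ** 2 for a, b in zip(x, m))
        return min(self.means, key=lambda lbl: dist(self.means[lbl]))

clf = NearestClassMean()
# Phase 1: classes "a" and "b" arrive.
clf.learn_classes({"a": [[0.0, 0.0], [0.2, 0.0]], "b": [[1.0, 1.0]]})
# Phase 2: class "c" arrives; no old data is revisited.
clf.learn_classes({"c": [[-1.0, 1.0]]})
```

Note the sketch sidesteps catastrophic forgetting only because the class means are independent; the papers below tackle the harder case of a shared network.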
Libraries
Use these libraries to find Incremental Learning models and implementations.
Most implemented papers
Visual Memorability for Robotic Interestingness via Unsupervised Online Learning
In this paper, we explore the problem of interesting scene prediction for mobile robots.
A Multi-Head Model for Continual Learning via Out-of-Distribution Replay
Instead of using the saved samples in memory to update the network for previous tasks/classes, as existing approaches do, MORE leverages the saved samples to build a task-specific classifier (adding a new classification head) without updating the network learned for previous tasks/classes.
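The head-per-task idea can be sketched with a frozen shared feature extractor plus one tiny nearest-mean head added per task. MORE selects the head at test time via out-of-distribution scores; the crude closest-mean rule below is a stand-in for that, and all names and data are illustrative:

```python
# Multi-head sketch: a frozen shared extractor, one new head per task,
# and heads for earlier tasks left untouched. Not the paper's
# architecture -- a toy stand-in to show the structure.

def features(x):
    # Frozen shared representation (illustrative identity mapping).
    return x

class MultiHeadModel:
    def __init__(self):
        self.heads = {}  # task id -> {label: feature mean}

    def add_task(self, task_id, data):
        # data: {label: [samples]}; only this new head is "trained".
        head = {}
        for label, xs in data.items():
            feats = [features(x) for x in xs]
            dim = len(feats[0])
            head[label] = [sum(f[d] for f in feats) / len(feats) for d in range(dim)]
        self.heads[task_id] = head

    def predict(self, x):
        # Crude stand-in for OOD-based head selection: return the label
        # whose stored mean (across all heads) is closest to the input.
        f = features(x)
        best = None
        for head in self.heads.values():
            for label, mean in head.items():
                d = sum((a - b) ** 2 for a, b in zip(f, mean))
                if best is None or d < best[0]:
                    best = (d, label)
        return best[1]

model = MultiHeadModel()
model.add_task("t1", {"cat": [[0.0, 0.0]], "dog": [[1.0, 0.0]]})
model.add_task("t2", {"car": [[5.0, 5.0]]})  # new head; t1's head untouched
```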
RMM: Reinforced Memory Management for Class-Incremental Learning
Class-Incremental Learning (CIL) [40] trains classifiers under a strict memory budget: in each incremental phase, learning is done for new data, most of which is abandoned to free space for the next phase.
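The fixed-budget constraint can be sketched with an exemplar store that splits its total budget evenly across the classes seen so far, so old classes give up slots as new ones arrive. RMM's contribution is to *learn* this allocation with reinforcement learning; the even split below is just the common static baseline:

```python
# Toy fixed-budget exemplar memory for class-incremental learning.
# Budget is re-split evenly whenever a new class arrives; real systems
# pick which exemplars to keep (e.g. by herding) rather than truncating.

class ExemplarMemory:
    def __init__(self, budget):
        self.budget = budget
        self.store = {}  # label -> kept samples

    def add_class(self, label, samples):
        self.store[label] = list(samples)
        quota = self.budget // len(self.store)
        for lbl in self.store:
            self.store[lbl] = self.store[lbl][:quota]  # drop surplus

    def size(self):
        return sum(len(v) for v in self.store.values())

mem = ExemplarMemory(budget=6)
mem.add_class("bird", list(range(10)))   # phase 1: quota 6
mem.add_class("plane", list(range(10)))  # phase 2: quota 3 per class
```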
CuMF_SGD: Fast and Scalable Matrix Factorization
CuMF_SGD overcomes the issue of memory discontinuity.
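For context, the algorithm CuMF_SGD accelerates on GPUs is plain SGD matrix factorization. A generic pure-Python sketch (not the paper's CUDA kernels; data and hyperparameters are illustrative) looks like:

```python
# Generic SGD matrix factorization: each observed rating (u, i, r)
# drives an update of its user and item factor vectors.

import random

def sgd_mf(ratings, n_users, n_items, k=2, lr=0.05, reg=0.02,
           epochs=500, seed=0):
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            e = r - pred  # prediction error for this rating
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (e * qi - reg * pu)
                Q[i][f] += lr * (e * pu - reg * qi)
    return P, Q
```

CuMF_SGD's point is that the memory-access pattern of these updates, not the arithmetic, is the bottleneck at scale.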
ExprGAN: Facial Expression Editing with Controllable Expression Intensity
To address these limitations, we propose an Expression Generative Adversarial Network (ExprGAN) for photo-realistic facial expression editing with controllable expression intensity.
Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence
We observe that, in addition to forgetting, a known issue when preserving knowledge, IL also suffers from a problem we call intransigence: the inability of a model to update its knowledge.
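Both failure modes are commonly measured as simple accuracy gaps. A minimal sketch with made-up numbers, assuming the usual definitions (forgetting = drop from the best past accuracy on an old task; intransigence = gap to a reference model trained with full access to all data):

```python
# Accuracy-gap metrics for the two failure modes of incremental
# learning. All accuracy values here are illustrative.

def forgetting(acc_history, final_acc):
    # acc_history: accuracies on one old task measured after each
    # earlier phase; forgetting is the drop from the best of them.
    return max(acc_history) - final_acc

def intransigence(reference_acc, incremental_acc):
    # Gap to a jointly-trained reference model on the current task.
    return reference_acc - incremental_acc
```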
Scalable Deep Learning Logo Detection
Existing logo detection methods usually consider a small number of logo classes and limited images per class, and rely on the strong assumption that tedious object bounding-box annotations are available; they are therefore not scalable to real-world dynamic applications.
Revisiting Distillation and Incremental Classifier Learning
To this end, we first thoroughly analyze the current state-of-the-art method for incremental learning (iCaRL) and demonstrate that the good performance of the system is not due to the reasons presented in the existing literature.
Sentence Embedding Alignment for Lifelong Relation Extraction
We formulate such a challenging problem as lifelong relation extraction and investigate memory-efficient incremental learning methods without catastrophically forgetting knowledge learned from previous tasks.
Class-incremental Learning via Deep Model Consolidation
The idea is to first train a separate model only for the new classes, and then combine the two individual models trained on two distinct sets of classes (old classes and new classes) via a novel double-distillation training objective.
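The combination step can be sketched by building the double-distillation regression target: each teacher's logits are mean-centered and concatenated over old plus new classes. The centering follows the paper's normalized-logit objective; the student fit on unlabeled auxiliary data is omitted, and the numbers are illustrative:

```python
# Double-distillation target in the spirit of Deep Model Consolidation:
# the consolidated student regresses onto the old model's logits for
# old classes and the new model's logits for new classes.

def double_distillation_target(old_logits, new_logits):
    # Center each teacher's logits (subtract their mean) so the two
    # teachers' scales are comparable, then concatenate into one
    # target vector over old + new classes.
    def center(v):
        m = sum(v) / len(v)
        return [x - m for x in v]
    return center(old_logits) + center(new_logits)
```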