Incremental learning of a sequence of tasks when the task-ID is not available at test time.
Class-Incremental Learning (CIL) aims to learn a classification model with the number of classes increasing phase-by-phase.
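The phase-by-phase protocol can be sketched in a few lines. This is a minimal illustration, not any specific paper's benchmark code; `make_phases` and the phase sizes are hypothetical:

```python
# Sketch of a class-incremental protocol: classes arrive in phases, and
# after each phase the model must classify over ALL classes seen so far,
# without a task-ID at test time. Helper names are illustrative.

def make_phases(class_ids, classes_per_phase):
    """Split the full label set into incremental phases."""
    return [class_ids[i:i + classes_per_phase]
            for i in range(0, len(class_ids), classes_per_phase)]

phases = make_phases(list(range(10)), classes_per_phase=2)

seen = []
for phase in phases:
    seen.extend(phase)  # the classifier head grows each phase
    # train on data for `phase` here; evaluate over all of `seen`

print(len(phases), seen)  # → 5 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Because evaluation always spans every class seen so far, accuracy on early phases exposes forgetting directly.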
However, there is an inherent trade-off between effectively learning new concepts and avoiding catastrophic forgetting of previous ones.
Detecting test samples drawn sufficiently far from the training distribution, whether statistically or adversarially, is a fundamental requirement for deploying a reliable classifier in many real-world machine learning applications.
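A common baseline for this kind of detection is to threshold the classifier's maximum softmax probability (the Hendrycks & Gimpel baseline); the sketch below assumes raw logits as input, and the threshold value is illustrative:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def is_ood(logits, threshold=0.5):
    """Flag a sample as out-of-distribution when the maximum softmax
    probability falls below the (illustrative) threshold."""
    return max(softmax(logits)) < threshold

print(is_ood([5.0, 0.1, 0.2]))  # confident prediction → False
print(is_ood([1.0, 1.1, 0.9]))  # near-uniform prediction → True
```

Stronger detectors replace the score (e.g. with temperature-scaled or distance-based scores), but the thresholding structure stays the same.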
In artificial neural networks, such memory replay can be implemented as ‘generative replay’, which can successfully – and surprisingly efficiently – prevent catastrophic forgetting on toy examples even in a class-incremental learning scenario.
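The mechanism can be sketched without any deep-learning framework: a generative model fitted on old classes supplies pseudo-samples that are mixed with real data from new classes. The per-class Gaussian "generator" below is a deliberately toy stand-in for a learned generative model; all names are illustrative:

```python
import random

random.seed(0)

class ToyGenerator:
    """Stand-in for a learned generative model: memorizes per-class
    mean/std and samples pseudo-data for replay (illustrative only)."""
    def __init__(self):
        self.stats = {}

    def fit(self, label, values):
        mu = sum(values) / len(values)
        sd = (sum((v - mu) ** 2 for v in values) / len(values)) ** 0.5
        self.stats[label] = (mu, sd)

    def sample(self, label, n):
        mu, sd = self.stats[label]
        return [(random.gauss(mu, sd), label) for _ in range(n)]

gen = ToyGenerator()
gen.fit(0, [0.9, 1.0, 1.1])       # "old" class learned in phase 1

new_data = [(5.0, 1), (5.1, 1)]   # phase 2 brings a new class
replay = gen.sample(0, len(new_data))
batch = new_data + replay         # train on real new + generated old data
```

The appeal is that memory cost is the generator's parameters rather than a growing store of raw past examples.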
A second type of approach fixes the deep model's size and introduces a mechanism designed to ensure a good compromise between the stability and plasticity of the model.
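One family of such fixed-size mechanisms is parameter regularization: the new-task loss is penalized for drifting away from the weights learned on previous tasks. The plain L2 variant below is a simplified sketch of that idea (methods like EWC additionally weight each parameter by its estimated importance); `lam` and the toy numbers are illustrative:

```python
def regularized_loss(task_loss, weights, old_weights, lam=0.1):
    """New-task loss plus an L2 penalty on drift from the previous-task
    weights; lam trades plasticity (low) against stability (high)."""
    penalty = sum((w - o) ** 2 for w, o in zip(weights, old_weights))
    return task_loss + lam * penalty

# One weight moved by 1.0, the other stayed put:
loss = regularized_loss(1.0, [1.0, 2.0], [0.0, 2.0])
print(loss)  # → 1.0 + 0.1 * 1.0 = 1.1
```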
It leverages the initial classifier weights, which provide a strong representation of past classes because they were trained with all class data.
Most existing algorithms make two strong assumptions that reduce the realism of the incremental scenario: (1) new data are assumed to arrive readily annotated when streamed, and (2) evaluations are run on balanced datasets, while most real-life datasets are actually imbalanced.
The problem is non-trivial if the agent runs on a limited computational budget and has a bounded memory of past data.
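A standard way to honor such a hard memory bound is a reservoir-sampled replay buffer: every item in the stream ends up in memory with equal probability, regardless of stream length. A minimal sketch (class and variable names are illustrative):

```python
import random

random.seed(42)

class ReservoirBuffer:
    """Fixed-size memory of past data: reservoir sampling keeps each
    streamed item with equal probability under a hard size bound."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

buf = ReservoirBuffer(capacity=5)
for x in range(100):
    buf.add(x)

print(len(buf.items), buf.seen)  # → 5 100
```

Each `add` is O(1), so the agent's compute and memory stay constant no matter how long the stream runs.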