While biological intelligence grows organically as new knowledge is gathered throughout life, Artificial Neural Networks forget catastrophically whenever they face a changing training data distribution.
Through extensive experiments, we show that applying LiDER delivers a stable performance gain to several state-of-the-art rehearsal CL methods across multiple datasets, both with and without pre-training.
In this paper, we propose a new, simple CL algorithm that focuses on solving the current task in a way that may facilitate learning the following ones.
This work investigates the entanglement between Continual Learning (CL) and Transfer Learning (TL).
A hallmark of human intelligence is the capability to acquire knowledge in a continuous fashion.
This work explores Continual Semi-Supervised Learning (CSSL): here, only a small fraction of the input examples shown to the learner are labeled.
In Continual Learning, a Neural Network is trained on a stream of data whose distribution shifts over time.
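To make this setting concrete, below is a minimal sketch of sequential training on a task stream, not code from any of the listed papers: the names `make_task_loaders` and `train_on_stream`, the class-based task split, and the assumption that `dataset` is an indexable collection of (input tensor, class-id) pairs are all illustrative. Rehearsal methods such as those mentioned above would additionally replay stored past examples inside the inner training loop.

```python
# Illustrative sketch of class-incremental training on a shifting data stream.
# Assumptions: `dataset` is an indexable collection of (input tensor, class-id)
# pairs; `classes_per_task` is a split such as [(0, 1), (2, 3), ...].
import torch
from torch import nn


def make_task_loaders(dataset, classes_per_task, batch_size=32):
    """Split one labeled dataset into per-task loaders, one class subset per task."""
    loaders = []
    for task_classes in classes_per_task:
        idx = [i for i, (_, y) in enumerate(dataset) if y in task_classes]
        subset = torch.utils.data.Subset(dataset, idx)
        loaders.append(torch.utils.data.DataLoader(subset, batch_size=batch_size, shuffle=True))
    return loaders


def train_on_stream(model, task_loaders, epochs=1, lr=0.03):
    """Train sequentially on each task; past tasks are never revisited, which is
    the regime in which a plain network forgets catastrophically."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for loader in task_loaders:
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
    return model
```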
Continual Learning has inspired a plethora of approaches and evaluation settings; however, most of them overlook the properties of a practical scenario, in which the data stream cannot be shaped as a sequence of tasks and offline training is not viable.