1 code implementation • 23 Nov 2023 • Sergi Masip, Pau Rodriguez, Tinne Tuytelaars, Gido M. van de Ven
We demonstrate that our approach significantly improves the continual learning performance of generative replay with only a moderate increase in computational cost.
no code implementations • 20 Nov 2023 • Eli Verwimp, Rahaf Aljundi, Shai Ben-David, Matthias Bethge, Andrea Cossu, Alexander Gepperth, Tyler L. Hayes, Eyke Hüllermeier, Christopher Kanan, Dhireesha Kudithipudi, Christoph H. Lampert, Martin Mundt, Razvan Pascanu, Adrian Popescu, Andreas S. Tolias, Joost Van de Weijer, Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars, Gido M. van de Ven
Continual learning is a sub-field of machine learning that aims to allow machine learning models to continuously learn from new data, accumulating knowledge without forgetting what was learned in the past.
no code implementations • 8 Nov 2023 • Timm Hess, Tinne Tuytelaars, Gido M. van de Ven
Recent years have seen considerable progress in the continual training of deep neural networks, predominantly thanks to approaches that add replay or regularization terms to the loss function to approximate the joint loss over all tasks so far.
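Concretely, such approaches augment the current-task objective with extra terms. Below is a minimal sketch, in PyTorch, of what such an augmented loss can look like; the function name, the `reg_anchor` format, and the EWC-style importance weights are illustrative assumptions, not the specific method studied in this paper.

```python
import torch.nn.functional as F

def continual_loss(model, batch, replay_batch=None, reg_anchor=None, reg_strength=0.0):
    """Illustrative continual-learning loss: the current-task loss plus an optional
    replay term and an optional quadratic regularization term, which together
    approximate the joint loss over all tasks seen so far (sketch only)."""
    x, y = batch
    loss = F.cross_entropy(model(x), y)                 # loss on the current task

    if replay_batch is not None:                        # replay term: rehearse stored or generated data
        x_re, y_re = replay_batch
        loss = loss + F.cross_entropy(model(x_re), y_re)

    if reg_anchor is not None:                          # regularization term: stay close to earlier parameters
        for name, p in model.named_parameters():
            anchor, importance = reg_anchor[name]       # e.g. EWC-style per-parameter importance (assumed format)
            loss = loss + reg_strength * (importance * (p - anchor) ** 2).sum()

    return loss
```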
no code implementations • 30 May 2023 • Michał Zając, Tinne Tuytelaars, Gido M. van de Ven
Class-incremental learning (CIL) is a particularly challenging variant of continual learning, where the goal is to learn to discriminate between all classes presented in an incremental fashion.
no code implementations • 3 Apr 2023 • Timm Hess, Eli Verwimp, Gido M. van de Ven, Tinne Tuytelaars
While it is established that neural networks suffer from catastrophic forgetting "at the output level", it is debated whether this is also the case at the level of representations.
1 code implementation • NeurIPS 2021 • Ta-Chu Kao, Kristopher T. Jensen, Gido M. van de Ven, Alberto Bernacchia, Guillaume Hennequin
In contrast, artificial agents are prone to 'catastrophic forgetting' whereby performance on previous tasks deteriorates rapidly as new ones are acquired.
1 code implementation • 20 Apr 2021 • Gido M. van de Ven, Zhe Li, Andreas S. Tolias
As a proof-of-principle, here we implement this strategy by training a variational autoencoder for each class to be learned and by using importance sampling to estimate the likelihoods p(x|y).
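As a rough illustration of that likelihood estimate, the sketch below computes an importance-weighted estimate of log p(x|y) using the VAE trained on class y; the `encode` and `log_likelihood` methods and all other names are assumptions made for illustration, not the paper's actual interface.

```python
import math
import torch
from torch.distributions import Normal

def estimate_log_px_given_y(class_vae, x, num_samples=128):
    """Importance-sampling estimate (sketch):
       log p(x|y) ~ logsumexp_s[ log p(x|z_s) + log p(z_s) - log q(z_s|x) ] - log S,
       with z_s ~ q(z|x) from the VAE trained on class y."""
    mu, log_var = class_vae.encode(x)                        # assumed: parameters of q(z|x)
    q = Normal(mu, (0.5 * log_var).exp())
    prior = Normal(torch.zeros_like(mu), torch.ones_like(mu))

    log_weights = []
    for _ in range(num_samples):
        z = q.rsample()
        log_px_z = class_vae.log_likelihood(x, z)            # assumed: decoder log p(x|z)
        log_weights.append(log_px_z + prior.log_prob(z).sum(-1) - q.log_prob(z).sum(-1))

    log_weights = torch.stack(log_weights, dim=0)            # shape [num_samples, batch]
    return torch.logsumexp(log_weights, dim=0) - math.log(num_samples)
```

Classification then amounts to picking the class y that maximizes log p(x|y), plus log p(y) if the class priors differ.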
1 code implementation • 24 Nov 2020 • Shuang Li, Yilun Du, Gido M. van de Ven, Igor Mordatch
We motivate Energy-Based Models (EBMs) as a promising model class for continual learning problems.
1 code implementation • 13 Aug 2020 • Gido M. van de Ven, Hava T. Siegelmann, Andreas S. Tolias
In artificial neural networks, such memory replay can be implemented as ‘generative replay’, which can successfully – and surprisingly efficiently – prevent catastrophic forgetting on toy examples even in a class-incremental learning scenario.
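To make the idea concrete, here is a minimal sketch of a generative-replay training step, assuming a classifier plus frozen copies of the model and generator from before the current task; training of the generator itself is omitted and all names are illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def train_epoch_with_generative_replay(model, prev_model, prev_generator,
                                        loader, optimizer, replay_weight=0.5):
    """One epoch of generative replay (illustrative sketch): inputs sampled from the
    previous generator are labelled by the previous model and rehearsed alongside
    the current task's data, so earlier tasks are not simply overwritten."""
    for x, y in loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)                      # current-task loss

        if prev_generator is not None:
            with torch.no_grad():
                x_replay = prev_generator.sample(x.size(0))      # assumed: generated 'pseudo-inputs'
                y_replay = prev_model(x_replay).argmax(dim=1)    # labels from the previous model
            replay_loss = F.cross_entropy(model(x_replay), y_replay)
            loss = (1 - replay_weight) * loss + replay_weight * replay_loss

        loss.backward()
        optimizer.step()
```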
1 code implementation • 27 Apr 2020 • Joshua T. Vogelstein, Jayanta Dey, Hayden S. Helm, Will LeVine, Ronak D. Mehta, Ali Geisa, Haoyin Xu, Gido M. van de Ven, Emily Chang, Chenyu Gao, Weiwei Yang, Bryan Tower, Jonathan Larson, Christopher M. White, Carey E. Priebe
But striving to avoid forgetting sets the bar unnecessarily low: the goal of lifelong learning, whether biological or artificial, should be to improve performance on all tasks (including past and future) with any new data.
8 code implementations • 15 Apr 2019 • Gido M. van de Ven, Andreas S. Tolias
Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult for machine learning systems.
no code implementations • 27 Sep 2018 • Gido M. van de Ven, Andreas S. Tolias
To enable more meaningful comparisons, we identified three distinct continual learning scenarios based on whether task identity is known and, if it is not, whether it needs to be inferred.
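For intuition, the three scenarios differ mainly in what the model must decide at test time. The sketch below is an illustration only, assuming a classifier with hypothetical helpers `classes_of_task` and `classes_per_task`; it is not code from the paper.

```python
def predict(model, x, scenario, task_id=None):
    """Test-time prediction under the three continual learning scenarios (sketch):
       - task-incremental:   task identity is given, so only that task's outputs are used;
       - domain-incremental: task identity is unknown and need not be inferred, so the
                             shared output units are used directly;
       - class-incremental:  task identity is unknown and must be inferred, so the model
                             chooses among all classes of all tasks seen so far."""
    logits = model(x)
    if scenario == "task":
        return logits[:, model.classes_of_task(task_id)].argmax(dim=1)   # hypothetical helper
    elif scenario == "domain":
        return logits[:, :model.classes_per_task].argmax(dim=1)          # hypothetical attribute
    else:  # "class"
        return logits.argmax(dim=1)
```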
5 code implementations • 27 Sep 2018 • Gido M. van de Ven, Andreas S. Tolias
A major obstacle to developing artificial intelligence applications capable of true lifelong learning is that artificial neural networks quickly and catastrophically forget previously learned tasks when trained on a new one.