1 code implementation • 17 Feb 2025 • Gido M. van de Ven
One of the most popular methods for continual learning with deep neural networks is Elastic Weight Consolidation (EWC), which involves computing the Fisher Information matrix.
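In practice, EWC typically approximates the Fisher Information by its diagonal, estimated from per-sample gradients of the model's log-likelihood. Below is a minimal PyTorch sketch of that computation and of the resulting quadratic penalty; the function names are illustrative, and the choice to sample labels from the model's own predictive distribution is one common estimator (substituting the observed labels gives the cheaper 'empirical Fisher' variant), not necessarily the exact computation this paper analyses.

```python
import torch
import torch.nn.functional as F

def diagonal_fisher(model, data_loader, device="cpu"):
    """Per-sample estimate of the diagonal of the Fisher Information.

    Labels are sampled from the model's own predictive distribution;
    using the observed labels instead gives the 'empirical Fisher'.
    """
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    count = 0
    for x, _ in data_loader:
        for xi in x.to(device):
            model.zero_grad()
            log_probs = F.log_softmax(model(xi.unsqueeze(0)), dim=1)
            y_sampled = torch.multinomial(log_probs.detach().exp(), 1).squeeze(1)
            F.nll_loss(log_probs, y_sampled).backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    fisher[n] += p.grad.detach() ** 2
            count += 1
    return {n: f / count for n, f in fisher.items()}

def ewc_penalty(model, fisher, anchor_params, lam=1.0):
    """Quadratic EWC penalty: 0.5 * lam * sum_i F_i * (theta_i - theta*_i)^2."""
    loss = sum((fisher[n] * (p - anchor_params[n]) ** 2).sum()
               for n, p in model.named_parameters())
    return 0.5 * lam * loss
```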
no code implementations • 23 Nov 2024 • Filip Ilievski, Barbara Hammer, Frank van Harmelen, Benjamin Paassen, Sascha Saralajew, Ute Schmid, Michael Biehl, Marianna Bolognesi, Xin Luna Dong, Kiril Gashteovski, Pascal Hitzler, Giuseppe Marra, Pasquale Minervini, Martin Mundt, Axel-Cyrille Ngonga Ngomo, Alessandro Oltramari, Gabriella Pasi, Zeynep G. Saribatur, Luciano Serafini, John Shawe-Taylor, Vered Shwartz, Gabriella Skitalinskaya, Clemens Stachl, Gido M. van de Ven, Thomas Villmann
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
no code implementations • 7 Jun 2024 • Jason Yoo, Yingchen He, Saeid Naderiparizi, Dylan Green, Gido M. van de Ven, Geoff Pleiss, Frank Wood
This work demonstrates that training autoregressive video diffusion models from a single, continuous video stream is not only possible but, remarkably, can also be competitive with standard offline training approaches given the same number of gradient steps.
no code implementations • 7 May 2024 • Hamed Hemati, Lorenzo Pellegrini, Xiaotian Duan, Zixuan Zhao, Fangfang Xia, Marc Masana, Benedikt Tscheschner, Eduardo Veas, Yuxiang Zheng, Shiji Zhao, Shao-Yuan Li, Sheng-Jun Huang, Vincenzo Lomonaco, Gido M. van de Ven
Continual learning (CL) provides a framework for training models in ever-evolving environments.
no code implementations • 8 Mar 2024 • Gido M. van de Ven, Nicholas Soures, Dhireesha Kudithipudi
This book chapter delves into the dynamics of continual learning, which is the process of incrementally learning from a non-stationary stream of data.
2 code implementations • 27 Dec 2023 • Sebastian Dziadzio, Çağatay Yıldız, Gido M. van de Ven, Tomasz Trzciński, Tinne Tuytelaars, Matthias Bethge
In a simple setting with direct supervision on the generative factors, we show how learning class-agnostic transformations offers a way to circumvent catastrophic forgetting and improve classification accuracy over time.
2 code implementations • 23 Nov 2023 • Sergi Masip, Pau Rodriguez, Tinne Tuytelaars, Gido M. van de Ven
Diffusion models are powerful generative models that achieve state-of-the-art performance in image synthesis.
no code implementations • 20 Nov 2023 • Eli Verwimp, Rahaf Aljundi, Shai Ben-David, Matthias Bethge, Andrea Cossu, Alexander Gepperth, Tyler L. Hayes, Eyke Hüllermeier, Christopher Kanan, Dhireesha Kudithipudi, Christoph H. Lampert, Martin Mundt, Razvan Pascanu, Adrian Popescu, Andreas S. Tolias, Joost Van de Weijer, Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars, Gido M. van de Ven
Continual learning is a subfield of machine learning that aims to allow models to continuously learn from new data, accumulating knowledge without forgetting what was learned in the past.
1 code implementation • 8 Nov 2023 • Timm Hess, Tinne Tuytelaars, Gido M. van de Ven
While there is some continual learning work that alters the optimization trajectory (e.g., using gradient projection techniques), this line of research is positioned as an alternative to improving the optimization objective, whereas we argue the two should be complementary.
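For reference, one family of trajectory-altering methods projects the new task's gradient onto the subspace orthogonal to directions deemed important for earlier tasks. The sketch below is a generic illustration of that idea in PyTorch, not the specific method proposed or evaluated in this paper; `stored_dirs` is a hypothetical list of flattened gradient vectors retained from previous tasks.

```python
import torch

def orthonormalise(vectors, eps=1e-10):
    """Gram-Schmidt: turn stored gradient directions into an
    orthonormal basis (near-duplicate directions are dropped)."""
    basis = []
    for v in vectors:
        w = v.clone()
        for b in basis:
            w = w - (w @ b) * b
        if w.norm() > eps:
            basis.append(w / w.norm())
    return basis

def project_out(grad, basis):
    """Remove from `grad` its components along the protected
    directions, so the update (ideally) does not interfere with
    what was learned on earlier tasks."""
    g = grad.clone()
    for b in basis:
        g = g - (g @ b) * b
    return g

# Usage sketch: flatten the current gradients, project, write back.
# params = [p for p in model.parameters() if p.grad is not None]
# g = torch.cat([p.grad.flatten() for p in params])
# g = project_out(g, orthonormalise(stored_dirs))
# ... unflatten g back into each p.grad before optimizer.step()
```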
1 code implementation • 30 May 2023 • Michał Zając, Tinne Tuytelaars, Gido M. van de Ven
Class-incremental learning (CIL) is a particularly challenging variant of continual learning, where the goal is to learn to discriminate between all classes presented in an incremental fashion.
1 code implementation • 3 Apr 2023 • Timm Hess, Eli Verwimp, Gido M. van de Ven, Tinne Tuytelaars
Continual learning research has shown that neural networks suffer from catastrophic forgetting "at the output level", but it is debated whether this is also the case at the level of learned representations.
1 code implementation • NeurIPS 2021 • Ta-Chu Kao, Kristopher T. Jensen, Gido M. van de Ven, Alberto Bernacchia, Guillaume Hennequin
In contrast, artificial agents are prone to 'catastrophic forgetting' whereby performance on previous tasks deteriorates rapidly as new ones are acquired.
2 code implementations • 20 Apr 2021 • Gido M. van de Ven, Zhe Li, Andreas S. Tolias
As a proof-of-principle, here we implement this strategy by training a variational autoencoder for each class to be learned and by using importance sampling to estimate the likelihoods p(x|y).
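A minimal sketch of that importance-sampling estimate is given below. It assumes a hypothetical VAE interface (`encode` returning the mean and log-variance of a Gaussian q(z|x), `decode` returning the mean of a unit-variance Gaussian decoder); the paper's actual architecture and likelihood model may differ.

```python
import math
import torch
from torch.distributions import Normal

def estimate_log_px(vae, x, n_samples=128):
    """Importance-sampling estimate of log p(x) under one VAE:
    log p(x) ~ logmeanexp_k [log p(x|z_k) + log p(z_k) - log q(z_k|x)],
    with z_k drawn from the encoder distribution q(z|x)."""
    mu, logvar = vae.encode(x)          # assumed: Gaussian q(z|x) parameters
    std = (0.5 * logvar).exp()
    q = Normal(mu, std)
    prior = Normal(torch.zeros_like(mu), torch.ones_like(std))
    log_ws = []
    for _ in range(n_samples):
        z = mu + std * torch.randn_like(std)     # z ~ q(z|x)
        x_mu = vae.decode(z)            # assumed: mean of Gaussian p(x|z)
        log_px_z = Normal(x_mu, 1.0).log_prob(x).sum(dim=-1)
        log_ws.append(log_px_z + prior.log_prob(z).sum(dim=-1)
                      - q.log_prob(z).sum(dim=-1))
    log_w = torch.stack(log_ws)                  # (n_samples, batch)
    return torch.logsumexp(log_w, dim=0) - math.log(n_samples)

def classify(vaes, x, log_prior=None):
    """Bayes rule: argmax_y [log p(x|y) + log p(y)], one VAE per class y."""
    scores = torch.stack([estimate_log_px(v, x) for v in vaes], dim=1)
    if log_prior is not None:
        scores = scores + log_prior
    return scores.argmax(dim=1)
```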
1 code implementation • 24 Nov 2020 • Shuang Li, Yilun Du, Gido M. van de Ven, Igor Mordatch
We motivate Energy-Based Models (EBMs) as a promising model class for continual learning problems.
1 code implementation • 13 Aug 2020 • Gido M. van de Ven, Hava T. Siegelmann, Andreas S. Tolias
In artificial neural networks, such memory replay can be implemented as 'generative replay', which can successfully, and surprisingly efficiently, prevent catastrophic forgetting on toy examples even in a class-incremental learning scenario.
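In its simplest form, generative replay interleaves each batch of new data with inputs sampled from a generator trained on earlier tasks, soft-labelled by a frozen copy of the previous model. The sketch below illustrates one such training step under assumed interfaces (`generator.sample(n)` returning inputs, models returning logits); it is a generic illustration rather than the exact setup of this paper.

```python
import torch
import torch.nn.functional as F

def replay_train_step(model, prev_model, generator, x_new, y_new,
                      optimizer, replay_ratio=1.0):
    """One update mixing real current-task data with generative replay:
    generated inputs are soft-labelled by the frozen previous model and
    replayed through a distillation loss."""
    optimizer.zero_grad()

    # Standard cross-entropy on the current task's real data.
    loss_new = F.cross_entropy(model(x_new), y_new)

    # Replayed inputs for earlier tasks, with soft targets from the
    # previous model (no gradients flow through either).
    n_replay = int(replay_ratio * x_new.size(0))
    with torch.no_grad():
        x_replay = generator.sample(n_replay)        # assumed interface
        soft_targets = F.softmax(prev_model(x_replay), dim=1)
    loss_replay = F.kl_div(F.log_softmax(model(x_replay), dim=1),
                           soft_targets, reduction="batchmean")

    loss = loss_new + loss_replay
    loss.backward()
    optimizer.step()
    return loss.item()
```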
1 code implementation • 27 Apr 2020 • Joshua T. Vogelstein, Jayanta Dey, Hayden S. Helm, Will LeVine, Ronak D. Mehta, Ali Geisa, Haoyin Xu, Gido M. van de Ven, Emily Chang, Chenyu Gao, Weiwei Yang, Bryan Tower, Jonathan Larson, Christopher M. White, Carey E. Priebe
But striving to avoid forgetting sets the goal unnecessarily low: the goal of lifelong learning, whether biological or artificial, should be to improve performance on all tasks (including past and future) with any new data.
8 code implementations • 15 Apr 2019 • Gido M. van de Ven, Andreas S. Tolias
Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult for machine learning systems.
5 code implementations • 27 Sep 2018 • Gido M. van de Ven, Andreas S. Tolias
A major obstacle to developing artificial intelligence applications capable of true lifelong learning is that artificial neural networks quickly and catastrophically forget previously learned tasks when trained on a new one.
no code implementations • 27 Sep 2018 • Gido M. van de Ven, Andreas S. Tolias
To enable more meaningful comparisons, we identified three distinct continual learning scenarios based on whether task identity is known at test time and, if it is not, whether it must be inferred.