no code implementations • 24 Jul 2024 • Ali Hummos, Felipe del Río, Brabeeba Mien Wang, Julio Hurtado, Cristian B. Calderon, Guangyu Robert Yang
We show that gradients backpropagated through a neural network to a task representation layer are an efficient heuristic to infer current task demands, a process we refer to as gradient-based inference (GBI).
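The core idea can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a frozen linear layer with a task-embedding input `z`, and infers the task by gradient descent on `z` alone (a hand-derived MSE gradient stands in for backpropagation).

```python
import numpy as np

# Hypothetical frozen network y = W [x; z], where z is a task-representation
# input. Gradient-based inference (GBI) backpropagates the loss to z and
# descends on z only, leaving the weights W untouched.

rng = np.random.default_rng(0)
d_x, d_z, d_y = 4, 2, 3
W = rng.normal(size=(d_y, d_x + d_z))  # frozen weights

def forward(x, z):
    return W @ np.concatenate([x, z])

def grad_z(x, z, y_true):
    # MSE loss L = 0.5 * ||W[x;z] - y||^2, so dL/dz = W_z^T (y_hat - y),
    # where W_z is the block of W acting on z.
    err = forward(x, z) - y_true
    return W[:, d_x:].T @ err

# Simulate observations generated under an unknown "true" task embedding.
z_true = np.array([1.0, -0.5])
x = rng.normal(size=d_x)
y = forward(x, z_true)

# Infer the task: start from a neutral embedding and follow the gradient.
z = np.zeros(d_z)
for _ in range(200):
    z -= 0.05 * grad_z(x, z, y)
```

After the loop, `z` has moved toward an embedding consistent with the observed data, i.e. the loss under the inferred task representation is far below the loss at the neutral starting point.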
no code implementations • 11 Jul 2024 • Clea Rebillard, Julio Hurtado, Andrii Krutsylo, Lucia Passaro, Vincenzo Lomonaco
This work proposes Continual Visual Mapping (CVM), an approach that continually grounds vision representations in a knowledge space extracted from a fixed language model.
no code implementations • 9 Mar 2024 • Rudy Semola, Julio Hurtado, Vincenzo Lomonaco, Davide Bacciu
This paper aims to explore the role of hyperparameter selection in continual learning and the necessity of continually and automatically tuning them according to the complexity of the task at hand.
no code implementations • 22 Sep 2023 • Eric Nuertey Coleman, Julio Hurtado, Vincenzo Lomonaco
However, one limitation of this scenario is that users cannot modify the model's internal knowledge; the only way to add or change it is to state it explicitly during the current interaction.
2 code implementations • 20 Aug 2023 • Albin Soutif--Cormerais, Antonio Carta, Andrea Cossu, Julio Hurtado, Hamed Hemati, Vincenzo Lomonaco, Joost Van de Weijer
Online continual learning aims to get closer to a live learning experience by learning directly on a stream of data with temporally shifting distribution and by storing a minimum amount of data from that stream.
no code implementations • 16 Jun 2023 • Felipe del Rio, Julio Hurtado, Cristian Buc, Alvaro Soto, Vincenzo Lomonaco
One of the objectives of Continual Learning is to learn new concepts continually over a stream of experiences while avoiding catastrophic forgetting.
no code implementations • 29 Jan 2023 • Julio Hurtado, Dario Salvati, Rudy Semola, Mattia Bosio, Vincenzo Lomonaco
In this work, we present a brief introduction to predictive maintenance, non-stationary environments, and continual learning, together with an extensive review of the current state of applying continual learning to real-world applications, specifically predictive maintenance.
1 code implementation • 26 Jan 2023 • Hamed Hemati, Andrea Cossu, Antonio Carta, Julio Hurtado, Lorenzo Pellegrini, Davide Bacciu, Vincenzo Lomonaco, Damian Borth
We propose two stochastic stream generators that produce a wide range of CIR streams starting from a single dataset and a few interpretable control parameters.
no code implementations • CVPR 2023 • Andrés Villa, Juan León Alcázar, Motasem Alfarra, Kumail Alhamoud, Julio Hurtado, Fabian Caba Heilbron, Alvaro Soto, Bernard Ghanem
In this paper, we address the problem of continual learning for video data.
no code implementations • 3 Oct 2022 • Vladimir Araujo, Helena Balabin, Julio Hurtado, Alvaro Soto, Marie-Francine Moens
Lifelong language learning seeks to have models continuously learn multiple tasks presented sequentially, without suffering from catastrophic forgetting.
no code implementations • 7 Jul 2022 • Alain Raymond-Saez, Julio Hurtado, Alvaro Soto
Curriculum Learning is a powerful training method that allows for faster and better training in some settings.
1 code implementation • 4 Jul 2022 • Julio Hurtado, Alain Raymond-Saez, Vladimir Araujo, Vincenzo Lomonaco, Alvaro Soto, Davide Bacciu
This paper introduces Memory Outlier Elimination (MOE), a method for identifying and eliminating outliers in the memory buffer by choosing samples from label-homogeneous subpopulations.
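The selection idea can be illustrated with a small sketch. This is an assumed simplification, not the paper's exact algorithm: within each class, it keeps the buffer samples closest to the class centroid in feature space and treats the farthest ones as outliers.

```python
import numpy as np

def filter_buffer(features, labels, keep_frac=0.8):
    """Keep, per class, the keep_frac fraction of samples nearest the
    class centroid; distant samples are discarded as outliers."""
    keep = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        centroid = features[idx].mean(axis=0)
        dist = np.linalg.norm(features[idx] - centroid, axis=1)
        n_keep = max(1, int(keep_frac * len(idx)))
        keep.extend(idx[np.argsort(dist)[:n_keep]])  # nearest first
    return np.sort(np.array(keep))

# Three tight same-class samples plus one far-away outlier:
feats = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
labs = np.array([0, 0, 0, 0])
kept = filter_buffer(feats, labs, keep_frac=0.75)  # drops the outlier
```

Restricting the centroid and distance computation to label-homogeneous subsets is what prevents a multi-class buffer from mistaking a small class for a cluster of outliers.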
1 code implementation • 18 Apr 2022 • Vladimir Araujo, Julio Hurtado, Alvaro Soto, Marie-Francine Moens
The ability to continuously learn remains elusive for deep learning models.
3 code implementations • 2 Aug 2021 • Fabrice Normandin, Florian Golemo, Oleksiy Ostapenko, Pau Rodriguez, Matthew D Riemer, Julio Hurtado, Khimya Khetarpal, Ryan Lindeborg, Lucas Cecchi, Timothée Lesort, Laurent Charlin, Irina Rish, Massimo Caccia
We propose a taxonomy of settings, where each setting is described as a set of assumptions.
1 code implementation • NeurIPS 2021 • Julio Hurtado, Alain Raymond-Saez, Alvaro Soto
On the other hand, a set of trainable masks provides the key mechanism for selectively choosing the KB weights relevant to each task.
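A minimal sketch of this mask-based selection, under assumed simplifications (random fixed binary masks standing in for trained ones, a single ReLU layer as the knowledge base):

```python
import numpy as np

# Shared knowledge base (KB): one weight matrix reused across all tasks.
rng = np.random.default_rng(1)
KB = rng.normal(size=(8, 8))

# One binary mask per task, selecting which KB weights that task may use.
# In the actual method these masks are trainable; here they are fixed
# random stand-ins for illustration.
masks = {t: (rng.random(KB.shape) > 0.5).astype(float) for t in range(3)}

def forward(x, task_id):
    W_task = KB * masks[task_id]        # zero out weights unused by this task
    return np.maximum(0.0, W_task @ x)  # ReLU layer over the masked weights

out = forward(np.ones(8), task_id=0)
```

Because each task reads the KB through its own mask, tasks can share weights where useful while keeping task-specific subnetworks, which is what limits interference between tasks.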
no code implementations • 1 Jan 2021 • Julio Hurtado, Alain Raymond, Alvaro Soto
As a working hypothesis, we speculate that during learning some weights focus on mining patterns from frequent examples while others are in charge of memorizing rare long-tail samples.