no code implementations • 7 May 2024 • Hamed Hemati, Lorenzo Pellegrini, Xiaotian Duan, Zixuan Zhao, Fangfang Xia, Marc Masana, Benedikt Tscheschner, Eduardo Veas, Yuxiang Zheng, Shiji Zhao, Shao-Yuan Li, Sheng-Jun Huang, Vincenzo Lomonaco, Gido M. van de Ven
Continual learning (CL) provides a framework for training models in ever-evolving environments.
1 code implementation • 30 May 2023 • Stefan Leitner, M. Jehanzeb Mirza, Wei Lin, Jakub Micorek, Marc Masana, Mateusz Kozinski, Horst Possegger, Horst Bischof
We propose to store these affine parameters as a memory bank for each weather condition and plug in the weather-specific parameters during driving (i.e., at test time) when the respective weather conditions are encountered.
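A minimal sketch of how such a memory bank might look in PyTorch, assuming the affine parameters are the weight and bias of the network's BatchNorm layers (function and dictionary names here are illustrative, not from the paper's code):

```python
import torch.nn as nn

def snapshot_affine_params(model: nn.Module) -> dict:
    """Copy the affine (weight, bias) parameters of every BatchNorm layer."""
    bank = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            bank[name] = (m.weight.detach().clone(), m.bias.detach().clone())
    return bank

def load_affine_params(model: nn.Module, bank: dict) -> None:
    """Plug the condition-specific affine parameters back into the model."""
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d) and name in bank:
            w, b = bank[name]
            m.weight.data.copy_(w)
            m.bias.data.copy_(b)

# Hypothetical usage: one snapshot per weather condition,
# swapped in at test time when that condition is detected.
# memory_bank = {"clear": snapshot_affine_params(model), ...}
# load_affine_params(model, memory_bank["fog"])
```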
1 code implementation • 19 Apr 2022 • M. Jehanzeb Mirza, Marc Masana, Horst Possegger, Horst Bischof
This catastrophic forgetting is typically addressed via incremental learning approaches, which usually re-train the model while either keeping a memory bank of training samples or storing a copy of the entire model (or its parameters) for each scenario.
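For context, such a memory bank of training samples is often implemented as a reservoir-sampled buffer; the sketch below shows this common rehearsal baseline (not the method proposed here, which avoids this kind of storage):

```python
import random

class RehearsalBuffer:
    """Fixed-capacity memory bank of past training samples, filled with
    reservoir sampling so it remains an unbiased sample of the stream."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.samples = []
        self.n_seen = 0

    def add(self, x, y):
        self.n_seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append((x, y))
        else:
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.samples[j] = (x, y)  # replace with decreasing probability

    def sample(self, k: int):
        return random.sample(self.samples, min(k, len(self.samples)))
```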
1 code implementation • 22 Jun 2021 • Albin Soutif-Cormerais, Marc Masana, Joost Van de Weijer, Bartłomiej Twardowski
We also define a new forgetting measure for class-incremental learning, and see that forgetting is not the principal cause of low performance.
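The paper defines its own forgetting measure; for illustration only, a widely used variant computes, per task, the drop from the best accuracy ever achieved on it to its final accuracy (the matrix layout below is an assumption):

```python
import numpy as np

def average_forgetting(acc: np.ndarray) -> float:
    """acc[t, k]: accuracy on task k evaluated after training step t
    (lower-triangular T x T matrix; entries with k > t are unused).
    Returns the mean drop from each task's best past accuracy to its
    accuracy after the final step."""
    T = acc.shape[0]
    drops = []
    for k in range(T - 1):  # the task learned last cannot be forgotten yet
        best_past = acc[k:T - 1, k].max()
        drops.append(best_past - acc[T - 1, k])
    return float(np.mean(drops))
```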
4 code implementations • 1 Apr 2021 • Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti, Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido van de Ven, Martin Mundt, Qi She, Keiland Cooper, Jeremy Forest, Eden Belouadah, Simone Calderara, German I. Parisi, Fabio Cuzzolin, Andreas Tolias, Simone Scardapane, Luca Antiga, Subutai Ahmad, Adrian Popescu, Christopher Kanan, Joost Van de Weijer, Tinne Tuytelaars, Davide Bacciu, Davide Maltoni
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning.
1 code implementation • 28 Oct 2020 • Marc Masana, Xialei Liu, Bartłomiej Twardowski, Mikel Menta, Andrew D. Bagdanov, Joost Van de Weijer
For future learning systems, incremental learning is desirable because it allows for: efficient resource usage, by eliminating the need to retrain from scratch when new data arrives; reduced memory usage, by preventing or limiting the amount of data that must be stored (also important when privacy constraints are imposed); and learning that more closely resembles human learning.
no code implementations • 13 Jul 2020 • David Berga, Marc Masana, Joost Van de Weijer
We hypothesize that disentangled feature representations suffer less from catastrophic forgetting.
no code implementations • 4 Jul 2020 • Marc Masana, Bartłomiej Twardowski, Joost Van de Weijer
The influence of class orderings on the evaluation of incremental learning has received very little attention.
no code implementations • 23 Jan 2020 • Marc Masana, Tinne Tuytelaars, Joost Van de Weijer
To allow already learned features to adapt to the current task without changing the behavior of these features for previous tasks, we introduce task-specific feature normalization.
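A minimal sketch of task-specific feature normalization, assuming one BatchNorm layer per task is kept on top of shared features and selected by task identity (names are illustrative; the paper's exact formulation may differ):

```python
import torch
import torch.nn as nn

class TaskSpecificNorm(nn.Module):
    """Shared features are re-normalized with per-task statistics and
    affine parameters, so adapting the normalization to a new task leaves
    the behaviour of the features for previous tasks unchanged."""
    def __init__(self, num_features: int, num_tasks: int):
        super().__init__()
        self.norms = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_tasks)
        )

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        return self.norms[task_id](x)
```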
1 code implementation • 18 Sep 2019 • Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Gregory Slabaugh, Tinne Tuytelaars
Artificial neural networks excel at solving the classification problem for a particular, fixed task, acquiring knowledge through generalized learning behaviour in a distinct training phase.
no code implementations • WS 2018 • Ozan Caglayan, Adrien Bardet, Fethi Bougares, Loïc Barrault, Kai Wang, Marc Masana, Luis Herranz, Joost Van de Weijer
This paper describes the multimodal Neural Machine Translation systems developed by LIUM and CVC for the WMT18 Shared Task on Multimodal Translation.
1 code implementation • 16 Aug 2018 • Marc Masana, Idoia Ruiz, Joan Serrat, Joost Van de Weijer, Antonio M. Lopez
When neural networks process images that do not resemble the distribution seen during training, so-called out-of-distribution images, they often make wrong predictions, and do so too confidently.
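The overconfidence in question is easy to observe with the standard maximum-softmax score, sketched below (a common baseline used here purely for illustration, not the metric-learning approach this paper proposes):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def max_softmax_confidence(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Returns the highest class probability per input; on out-of-distribution
    images this score often remains high even when the prediction is wrong."""
    logits = model(x)
    return F.softmax(logits, dim=1).max(dim=1).values
```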
no code implementations • 27 Jun 2018 • Aymen Azaza, Joost Van de Weijer, Ali Douik, Marc Masana
Therefore, we extend object proposal methods with context proposals, which make it possible to incorporate the immediate context into the saliency computation.
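As a rough illustration of pairing an object proposal with its immediate context, one can enlarge each box around its center (a hypothetical construction; the paper's context proposals are defined in more detail):

```python
def context_proposal(box, scale=1.5, img_w=None, img_h=None):
    """Enlarge a proposal (x1, y1, x2, y2) around its center by `scale`,
    optionally clipped to the image, to capture the surrounding context."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    w, h = (x2 - x1) * scale, (y2 - y1) * scale
    nx1, ny1, nx2, ny2 = cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2
    if img_w is not None:
        nx1, nx2 = max(0.0, nx1), min(float(img_w), nx2)
    if img_h is not None:
        ny1, ny2 = max(0.0, ny1), min(float(img_h), ny2)
    return nx1, ny1, nx2, ny2
```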
2 code implementations • 8 Feb 2018 • Xialei Liu, Marc Masana, Luis Herranz, Joost Van de Weijer, Antonio M. Lopez, Andrew D. Bagdanov
In this paper we propose an approach to avoiding catastrophic forgetting in sequential task learning scenarios.
2 code implementations • ICCV 2017 • Marc Masana, Joost Van de Weijer, Luis Herranz, Andrew D. Bagdanov, Jose M. Alvarez
We show that domain transfer leads to large shifts in network activations and that it is desirable to take this into account when compressing.
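A sketch in this spirit: factorize a fully connected layer so the low-rank approximation is accurate on the target domain's actual activations rather than on the weights alone (shapes and names are assumptions; the paper's formulation differs in detail):

```python
import numpy as np

def activation_aware_compress(W, b, X, rank):
    """Compress a linear layer y = x @ W.T + b, with W (out, in) and
    target-domain activations X (n, in). The layer's responses on X are
    projected onto their top-`rank` subspace, giving two thin layers."""
    Y = X @ W.T + b                      # (n, out) responses on target data
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    V = Vt[:rank].T                      # (out, rank) dominant response basis
    W1 = V.T @ W                         # (rank, in): input -> compressed code
    W2 = V                               # (out, rank): code -> output
    b2 = V @ (V.T @ b)                   # bias projected onto the same basis
    return W1, W2, b2                    # y ≈ x @ W1.T @ W2.T + b2
```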
no code implementations • WS 2017 • Ozan Caglayan, Walid Aransa, Adrien Bardet, Mercedes García-Martínez, Fethi Bougares, Loïc Barrault, Marc Masana, Luis Herranz, Joost Van de Weijer
This paper describes the monomodal and multimodal Neural Machine Translation systems developed by LIUM and CVC for the WMT17 Shared Task on Multimodal Translation.
1 code implementation • WS 2016 • Ozan Caglayan, Walid Aransa, Yaxing Wang, Marc Masana, Mercedes García-Martínez, Fethi Bougares, Loïc Barrault, Joost Van de Weijer
This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge.
no code implementations • 11 May 2016 • Marc Masana, Joost Van de Weijer, Andrew D. Bagdanov
Object detection with deep neural networks is often performed by passing a few thousand candidate bounding boxes through a deep neural network for each image.
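For reference, that per-box evaluation pipeline looks roughly like the sketch below (an illustrative baseline of the cost being attacked, not the paper's pruning method):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def score_proposals(model, image, boxes, crop_size=224):
    """Crop, resize, and classify every candidate box independently;
    `image` is a (C, H, W) tensor, `boxes` holds integer (x1, y1, x2, y2)."""
    scores = []
    for x1, y1, x2, y2 in boxes:
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)          # (1, C, h, w)
        crop = F.interpolate(crop, size=(crop_size, crop_size),
                             mode="bilinear", align_corners=False)
        scores.append(model(crop).softmax(dim=1))
    return torch.cat(scores, dim=0)                          # (n_boxes, n_classes)
```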