1 code implementation • 26 Mar 2023 • Nader Asadi, MohammadReza Davari, Sudhir Mudur, Rahaf Aljundi, Eugene Belilovsky
Class prototypes are evolved continually in the same latent space, enabling learning and prediction at any point.
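As a rough sketch of the evolving-prototype idea, the snippet below keeps one continually updated prototype per class in a shared embedding space and predicts by nearest prototype; the EMA update rule and shapes are our own assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

class PrototypeClassifier:
    """One continually updated prototype per class in a shared latent
    space; prediction is nearest prototype, so it works at any point
    in the stream."""

    def __init__(self, momentum=0.99):
        self.protos = {}                    # class id -> unit-norm prototype
        self.momentum = momentum

    @torch.no_grad()
    def update(self, feats, labels):
        # Pull each class prototype towards the mean embedding of the
        # incoming samples of that class (EMA update, an assumption here).
        for c in labels.unique().tolist():
            mean = F.normalize(feats[labels == c].mean(0), dim=0)
            if c not in self.protos:
                self.protos[c] = mean
            else:
                p = self.momentum * self.protos[c] + (1 - self.momentum) * mean
                self.protos[c] = F.normalize(p, dim=0)

    @torch.no_grad()
    def predict(self, feats):
        classes = sorted(self.protos)
        P = torch.stack([self.protos[c] for c in classes])  # (C, D)
        sims = F.normalize(feats, dim=1) @ P.t()            # cosine similarity
        return torch.tensor(classes)[sims.argmax(dim=1)]
```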
1 code implementation • 23 Mar 2023 • Tomas Vojir, Jan Sochman, Rahaf Aljundi, Jiri Matas
In this work, we take a different approach and propose to leverage generic pre-trained representations.
no code implementations • 23 Mar 2023 • Aristeidis Panos, Yuriko Kobe, Daniel Olmeda Reino, Rahaf Aljundi, Richard E. Turner
In this work, we develop a baseline method, First Session Adaptation (FSA), that sheds light on the efficacy of existing CIL approaches and allows us to assess the relative performance contributions from head and body adaptation.
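A minimal sketch of an FSA-style baseline, assuming a plain SGD first-session fine-tune and a nearest-class-mean head (both our choices, not necessarily the paper's):

```python
import torch

def first_session_adaptation(backbone, classifier, sessions):
    """The body (backbone) is adapted only on the first session and
    frozen afterwards; later sessions only refresh the head, here a
    nearest-class-mean one."""
    opt = torch.optim.SGD(backbone.parameters(), lr=1e-3)
    ce = torch.nn.CrossEntropyLoss()
    means = {}                                     # class id -> mean feature

    for t, loader in enumerate(sessions):
        if t == 0:                                 # body adaptation, once
            backbone.train()
            for x, y in loader:
                opt.zero_grad()
                ce(classifier(backbone(x)), y).backward()
                opt.step()
            backbone.eval()
            for p in backbone.parameters():
                p.requires_grad_(False)            # frozen from now on
        with torch.no_grad():                      # head-only update
            for x, y in loader:
                f = backbone(x)
                for c in y.unique().tolist():
                    m = f[y == c].mean(0)
                    means[c] = m if c not in means else 0.5 * (means[c] + m)
    return means            # predict by nearest class mean in feature space
```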
no code implementations • 7 Nov 2022 • Rahaf Aljundi, Yash Patel, Milan Sulc, Daniel Olmeda, Nikolay Chumerin
In this work, we investigate the possibility of learning both the representation and the classifier using one objective function that combines the robustness of contrastive learning and the probabilistic interpretation of cross entropy loss.
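One way such a combined objective can look, sketched with assumed shapes and a temperature of our choosing; this illustrates the idea rather than reproducing the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def contrastive_ce(feats, class_weights, labels, tau=0.1):
    """Samples are scored against both class weight vectors and the
    other samples in the batch; the true class weight is the positive,
    so the softmax keeps a cross-entropy-style probabilistic reading
    while gaining contrastive structure."""
    z = F.normalize(feats, dim=1)                  # (B, D) embeddings
    w = F.normalize(class_weights, dim=1)          # (C, D) class weights
    sim_w = z @ w.t() / tau                        # similarity to classes
    sim_z = z @ z.t() / tau                        # similarity to samples
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim_z = sim_z.masked_fill(eye, float("-inf"))  # exclude self-similarity
    logits = torch.cat([sim_w, sim_z], dim=1)      # (B, C + B)
    return F.cross_entropy(logits, labels)         # positive = true class
```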
1 code implementation • 10 Oct 2022 • Paul Janson, Wenxuan Zhang, Rahaf Aljundi, Mohamed Elhoseiny
With the success of pretraining techniques in representation learning, a number of continual learning methods based on pretrained models have been proposed.
no code implementations • CVPR 2022 • MohammadReza Davari, Nader Asadi, Sudhir Mudur, Rahaf Aljundi, Eugene Belilovsky
Continual Learning research typically focuses on tackling the phenomenon of catastrophic forgetting in neural networks.
2 code implementations • ICLR 2022 • Lucas Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, Eugene Belilovsky
In this work, we focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream, and new classes must be distinguished from previous ones.
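A hedged sketch of the asymmetric-loss idea for this setting: incoming samples only compete among the classes present in the current batch, so their updates do not pull old-class representations around, while replayed memory samples use the full cross entropy. The function signature and masking are illustrative.

```python
import torch
import torch.nn.functional as F

def asymmetric_replay_loss(logits_in, y_in, logits_mem, y_mem, cur_classes):
    """Mask old-class logits for incoming samples; replayed samples
    keep the full softmax over all classes seen so far."""
    mask = torch.full_like(logits_in, float("-inf"))
    mask[:, cur_classes] = 0.0                     # keep current classes only
    loss_in = F.cross_entropy(logits_in + mask, y_in)
    loss_mem = F.cross_entropy(logits_mem, y_mem)
    return loss_in + loss_mem
```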
1 code implementation • ICCV 2021 • Farzaneh Rezaeianaran, Rakshith Shetty, Rahaf Aljundi, Daniel Olmeda Reino, Shanshan Zhang, Bernt Schiele
In order to robustly deploy object detectors across a wide range of scenarios, they should be adaptable to shifts in the input distribution without the need to constantly annotate new data.
1 code implementation • 24 Jun 2021 • Rahaf Aljundi, Daniel Olmeda Reino, Nikolay Chumerin, Richard E. Turner
This work identifies the crucial link between the two problems and investigates the Novelty Detection problem under the Continual Learning setting.
1 code implementation • ICCV 2021 • Tomas Vojir, Tomas Sipka, Rahaf Aljundi, Nikolay Chumerin, Daniel Olmeda Reino, Jiri Matas
To that end, we propose a reconstruction module that can be used with many existing semantic segmentation networks, and that is trained to recognize and reconstruct road (drivable) surface from a small bottleneck.
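A minimal sketch of such a reconstruction module, with illustrative layer sizes: a small decoder reconstructs the image from a low-capacity bottleneck, and the per-pixel error, trained down only on road pixels, serves as the anomaly score at test time.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RoadReconstruction(nn.Module):
    """Attachable reconstruction head: squeeze backbone features through
    a small bottleneck, decode back to RGB, and return a per-pixel
    reconstruction error map."""

    def __init__(self, feat_channels, bottleneck=16):
        super().__init__()
        self.squeeze = nn.Conv2d(feat_channels, bottleneck, 1)   # bottleneck
        self.decode = nn.Sequential(
            nn.Conv2d(bottleneck, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),                      # back to RGB
        )

    def forward(self, feats, image):
        recon = self.decode(self.squeeze(feats))
        recon = F.interpolate(recon, size=image.shape[-2:],
                              mode="bilinear", align_corners=False)
        return (recon - image).pow(2).mean(1)      # per-pixel error map

def road_loss(err_map, road_mask):
    # Train the module to reconstruct well only where the label is road;
    # high error inside the predicted road mask then flags obstacles.
    return (err_map * road_mask).sum() / road_mask.sum().clamp(min=1)
```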
no code implementations • 14 Oct 2020 • Rahaf Aljundi, Nikolay Chumerin, Daniel Olmeda Reino
State-of-the-art machine learning models require access to a significant amount of annotated data in order to achieve the desired level of performance.
1 code implementation • NeurIPS 2019 • Rahaf Aljundi, Eugene Belilovsky, Tinne Tuytelaars, Laurent Charlin, Massimo Caccia, Min Lin, Lucas Page-Caccia
Methods based on replay, either generative or from a stored memory, have been shown to be effective approaches for continual learning, matching or exceeding the state of the art in a number of standard benchmarks.
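A minimal sketch of the retrieval step, assuming plain SGD and illustrative hyperparameters: replay the stored samples whose loss would increase most under a virtual update on the incoming batch.

```python
import copy
import torch
import torch.nn.functional as F

def mir_retrieve(model, mem_x, mem_y, new_x, new_y, lr=0.1, k=10):
    """Maximally interfered retrieval, simplified: simulate one SGD
    step on the incoming batch, then pick the memory samples whose
    loss grows the most under that step."""
    virtual = copy.deepcopy(model)
    params = list(virtual.parameters())
    loss = F.cross_entropy(virtual(new_x), new_y)
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= lr * g                            # virtual parameter update
        before = F.cross_entropy(model(mem_x), mem_y, reduction="none")
        after = F.cross_entropy(virtual(mem_x), mem_y, reduction="none")
        idx = (after - before).topk(min(k, len(mem_y))).indices
    return mem_x[idx], mem_y[idx]                  # most interfered samples
```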
1 code implementation • 7 Oct 2019 • Rahaf Aljundi
A key component of such a never-ending learning process is to overcome the catastrophic forgetting of previously seen data, a problem that neural networks are well known to suffer from.
1 code implementation • 18 Sep 2019 • Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Gregory Slabaugh, Tinne Tuytelaars
Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase.
3 code implementations • NeurIPS 2019 • Rahaf Aljundi, Min Lin, Baptiste Goujaud, Yoshua Bengio
To prevent forgetting, a replay buffer is usually employed to store the previous data for the purpose of rehearsal.
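For context, a minimal reservoir-sampling buffer, the standard way to keep a uniform sample of the stream under a fixed memory budget; the paper itself argues for smarter, gradient-based selection.

```python
import random

class ReservoirBuffer:
    """Fixed-capacity rehearsal buffer filled by reservoir sampling,
    so every stream example is retained with equal probability."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []          # list of (x, y) pairs
        self.seen = 0           # number of stream examples observed

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)   # keep with prob capacity/seen
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, batch_size):
        return random.sample(self.data, min(batch_size, len(self.data)))
```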
no code implementations • 26 Dec 2018 • Mohamed Elhoseiny, Francesca Babiloni, Rahaf Aljundi, Marcus Rohrbach, Manohar Paluri, Tinne Tuytelaars
So far, life-long learning (LLL) has been studied in relatively small-scale and artificial setups.
1 code implementation • CVPR 2019 • Rahaf Aljundi, Klaas Kelchtermans, Tinne Tuytelaars
A sequence of tasks is learned, one at a time, with all data of the current task available but not of previous or future tasks.
1 code implementation • ICLR 2019 • Rahaf Aljundi, Marcus Rohrbach, Tinne Tuytelaars
In particular, we propose a novel regularizer that encourages representation sparsity by means of neural inhibition.
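A hedged sketch of a local-inhibition penalty: co-activations are penalized more strongly for nearby neurons, via a Gaussian weight over index distance. The exact weighting in the paper may differ; this is an illustrative form.

```python
import torch

def inhibition_penalty(h, sigma=1.0):
    """Sparsity via local neural inhibition: penalize neurons for
    firing together, with nearby neurons (by index) inhibiting each
    other more strongly. Assumes non-negative activations h of shape
    (batch, neurons), e.g. post-ReLU."""
    b, n = h.shape
    corr = (h.t() @ h) / b                       # co-activation matrix (n, n)
    idx = torch.arange(n, dtype=h.dtype)
    dist = (idx[None, :] - idx[:, None]) ** 2
    weight = torch.exp(-dist / (2 * sigma ** 2)) # Gaussian locality weight
    weight.fill_diagonal_(0.0)                   # no self-inhibition
    return (weight * corr).sum() / (n * (n - 1))
```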
2 code implementations • ECCV 2018 • Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, Tinne Tuytelaars
We show state-of-the-art performance and, for the first time, the ability to adapt the importance of the parameters based on unlabeled data towards what the network needs (not) to forget, which may vary depending on test conditions.
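A minimal sketch of importance estimation on unlabeled data in this style: parameter importance is the accumulated gradient magnitude of the squared output norm, and changes to important parameters are then penalized. Batching and the penalty weight are assumptions.

```python
import torch

def mas_importance(model, unlabeled_loader):
    """Importance of a parameter = average magnitude of the gradient of
    the squared L2 norm of the network output, computed without labels."""
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    count = 0
    for x in unlabeled_loader:
        model.zero_grad()
        out = model(x)
        out.pow(2).sum().backward()              # d||f(x)||^2 / d(theta)
        for n, p in model.named_parameters():
            if p.grad is not None:
                importance[n] += p.grad.abs()
        count += 1
    return {n: v / max(count, 1) for n, v in importance.items()}

def mas_penalty(model, old_params, importance, lam=1.0):
    # Penalize changes to parameters deemed important for past outputs.
    return lam * sum((importance[n] * (p - old_params[n]).pow(2)).sum()
                     for n, p in model.named_parameters())
```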
no code implementations • ICCV 2017 • Amal Rannen Triki, Rahaf Aljundi, Mathew B. Blaschko, Tinne Tuytelaars
This paper introduces a new lifelong learning solution where a single model is trained for a sequence of tasks.
no code implementations • 28 Nov 2016 • Rahaf Aljundi, Punarjay Chakravarty, Tinne Tuytelaars
In this work, we aim at automatically labeling actors in a TV series.
1 code implementation • CVPR 2017 • Rahaf Aljundi, Punarjay Chakravarty, Tinne Tuytelaars
Further, the autoencoders inherently capture the relatedness of one task to another, which allows selecting the most relevant prior model for training a new expert, whether with fine-tuning or with learning-without-forgetting.
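A minimal sketch of the gating idea, with illustrative feature and hidden sizes: one small undercomplete autoencoder per task, with reconstruction error measuring task relatedness and routing inputs to the best-matching expert.

```python
import torch
import torch.nn as nn

class TaskAutoencoder(nn.Module):
    """One undercomplete autoencoder per task, trained on that task's
    features; low reconstruction error means the input is close to the
    task's domain."""

    def __init__(self, feat_dim=4096, hidden=100):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, feat_dim)

    def error(self, feats):                       # mean reconstruction error
        return (self.dec(self.enc(feats)) - feats).pow(2).mean(1)

def route(autoencoders, feats):
    # Pick the expert whose autoencoder reconstructs the input best.
    errs = torch.stack([ae.error(feats) for ae in autoencoders])  # (T, B)
    return errs.argmin(0)                         # task index per sample
```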
no code implementations • 23 Mar 2016 • Rahaf Aljundi, Tinne Tuytelaars
To this end, we first analyze the output of each convolutional layer from a domain adaptation perspective.
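As a stand-in for such a layer-wise analysis, a simple probe comparing source and target activation statistics per layer; the paper's actual metric is not reproduced here.

```python
import torch

def layer_domain_gap(feats_src, feats_tgt):
    """Per-layer domain-shift probe: compare mean source and target
    activations at each named layer with a simple mean-discrepancy
    measure. Inputs are dicts of layer name -> activation tensor."""
    gaps = {}
    for name in feats_src:
        s = feats_src[name].flatten(1)            # (Ns, C*H*W)
        t = feats_tgt[name].flatten(1)            # (Nt, C*H*W)
        gaps[name] = (s.mean(0) - t.mean(0)).norm().item()
    return gaps                                   # larger = more domain shift
```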
no code implementations • CVPR 2015 • Rahaf Aljundi, Remi Emonet, Damien Muselet, Marc Sebban
Domain adaptation (DA) has seen much success in recent years in computer vision, dealing with situations where the learning process has to transfer knowledge from a source to a target domain.