no code implementations • 2 Aug 2024 • Simone Caldarella, Massimiliano Mancini, Elisa Ricci, Rahaf Aljundi
Vision-Language Models (VLMs) combine visual and textual understanding, rendering them well-suited for diverse tasks like generating image captions and answering visual questions across various domains.
no code implementations • 23 Jul 2024 • Aristeidis Panos, Rahaf Aljundi, Daniel Olmeda Reino, Richard E Turner
Vision language models (VLMs) demonstrate impressive capabilities in visual question answering and image captioning, acting as a crucial link between visual and language models.
no code implementations • 19 Jun 2024 • Vaibhav Singh, Rahaf Aljundi, Eugene Belilovsky
Foundational vision-language models have shown impressive performance on various downstream tasks.
no code implementations • 14 Mar 2024 • Soroush Seifi, Daniel Olmeda Reino, Fabien Despinoy, Rahaf Aljundi
Semantic Segmentation is one of the most challenging vision tasks, usually requiring large amounts of training data with expensive pixel-level annotations.
no code implementations • 20 Nov 2023 • Eli Verwimp, Rahaf Aljundi, Shai Ben-David, Matthias Bethge, Andrea Cossu, Alexander Gepperth, Tyler L. Hayes, Eyke Hüllermeier, Christopher Kanan, Dhireesha Kudithipudi, Christoph H. Lampert, Martin Mundt, Razvan Pascanu, Adrian Popescu, Andreas S. Tolias, Joost Van de Weijer, Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars, Gido M. van de Ven
Continual learning is a subfield of machine learning that aims to let models learn continuously from new data, accumulating knowledge without forgetting what was learned in the past.
no code implementations • 15 Nov 2023 • Simone Caldarella, Elisa Ricci, Rahaf Aljundi
Object-based Novelty Detection (ND) aims to identify unknown objects that do not belong to any of the classes an object detection model saw during training.
no code implementations • 3 Oct 2023 • Soroush Seifi, Daniel Olmeda Reino, Nikolay Chumerin, Rahaf Aljundi
Our solution is simple and efficient and acts as a natural extension of the closed-set supervised contrastive representation learning.
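Since this entry extends closed-set supervised contrastive representation learning, a reminder of that base objective may help. Below is a minimal sketch of the standard supervised contrastive (SupCon) loss; the temperature value and tensor shapes are illustrative, and this is the generic closed-set loss, not the paper's open-set extension.

```python
import torch
import torch.nn.functional as F

def sup_con_loss(features, labels, temperature=0.1):
    """features: (N, D) embeddings; labels: (N,) integer class ids."""
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature          # (N, N) cosine similarities
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, -1e9)             # exclude self-comparisons
    # Positives: other samples in the batch sharing the anchor's label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0                             # skip anchors with no positive
    loss = -(log_prob * pos_mask).sum(1)[valid] / pos_counts[valid]
    return loss.mean()
```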
1 code implementation • CVPR 2024 • Wenxuan Zhang, Paul Janson, Rahaf Aljundi, Mohamed Elhoseiny
Our method improves the accuracy of newly learned tasks by up to 7% while preserving the pretraining knowledge, with a negligible 0.9% decrease in accuracy on a representative control set.
1 code implementation • 26 Mar 2023 • Nader Asadi, MohammadReza Davari, Sudhir Mudur, Rahaf Aljundi, Eugene Belilovsky
Class prototypes are evolved continually in the same latent space, enabling learning and prediction at any point.
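The prototype mechanism described here can be sketched compactly: keep one vector per class in the shared latent space, update it as data arrives, and classify by nearest prototype. The EMA update rule and momentum value below are illustrative assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

class PrototypeClassifier:
    """Nearest-prototype classifier over a fixed latent space."""
    def __init__(self, momentum=0.9):
        self.protos = {}              # class id -> (D,) prototype vector
        self.momentum = momentum

    def update(self, embeddings, labels):
        """EMA-update each class prototype with its batch-mean embedding."""
        for c in labels.unique().tolist():
            mean = embeddings[labels == c].mean(0)
            old = self.protos.get(c)
            self.protos[c] = mean if old is None else \
                self.momentum * old + (1 - self.momentum) * mean

    def predict(self, embeddings):
        """Assign each embedding to the class of the nearest prototype (cosine)."""
        classes = list(self.protos)
        P = F.normalize(torch.stack([self.protos[c] for c in classes]), dim=1)
        sims = F.normalize(embeddings, dim=1) @ P.T
        return torch.tensor(classes)[sims.argmax(1)]
```

Because prototypes live in one latent space and are updated incrementally, prediction is available at any point in the stream, which is the property the entry highlights.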
1 code implementation • 23 Mar 2023 • Tomas Vojir, Jan Sochman, Rahaf Aljundi, Jiri Matas
We propose a novel OOD detection method, called GROOD, that formulates OOD detection as a Neyman-Pearson task with well-calibrated scores; it achieves excellent performance, predicated on the use of a good generic representation.
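The Neyman-Pearson framing amounts to fixing an acceptable error rate on in-distribution data and rejecting anything below the resulting score threshold. A minimal sketch follows; the choice of score function is left open here and is not the paper's actual score.

```python
import torch

def fit_threshold(id_scores, target_fpr=0.05):
    """Pick the cut-off so at most `target_fpr` of ID calibration data is rejected."""
    return torch.quantile(id_scores, target_fpr)

def accept(scores, threshold):
    """True where a sample is treated as in-distribution."""
    return scores >= threshold
```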
no code implementations • ICCV 2023 • Aristeidis Panos, Yuriko Kobe, Daniel Olmeda Reino, Rahaf Aljundi, Richard E. Turner
In this work, we develop a baseline method, First Session Adaptation (FSA), that sheds light on the efficacy of existing CIL approaches and allows us to assess the relative performance contributions of head and body adaptation.
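A First Session Adaptation-style baseline can be sketched as: fine-tune the body (feature extractor) only on the first session, then freeze it and keep adapting the head. The `fit_fn` training helper below is an assumed placeholder, not an API from the paper.

```python
import torch.nn as nn

def fsa_train(body, head, sessions, fit_fn):
    """fit_fn(model, data): assumed helper that runs a standard training loop."""
    first, *rest = sessions
    fit_fn(nn.Sequential(body, head), first)   # session 1: adapt body and head
    for p in body.parameters():
        p.requires_grad = False                # body frozen from now on
    body.eval()
    for session in rest:
        # later sessions: only the head receives gradient updates
        fit_fn(nn.Sequential(body, head), session)
```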
no code implementations • 7 Nov 2022 • Rahaf Aljundi, Yash Patel, Milan Sulc, Daniel Olmeda, Nikolay Chumerin
In this work, we investigate the possibility of learning both the representation and the classifier using one objective function that combines the robustness of contrastive learning with the probabilistic interpretation of the cross-entropy loss.
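One simple way to combine the two terms is a weighted sum, shown below; the weighting scheme is an illustrative assumption rather than the paper's exact formulation, and `sup_con_loss` refers to the sketch given earlier on this page.

```python
import torch.nn.functional as F

def joint_loss(logits, embeddings, labels, alpha=0.5, temperature=0.1):
    ce = F.cross_entropy(logits, labels)                  # probabilistic term
    con = sup_con_loss(embeddings, labels, temperature)   # contrastive robustness term
    return ce + alpha * con
```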
1 code implementation • 10 Oct 2022 • Paul Janson, Wenxuan Zhang, Rahaf Aljundi, Mohamed Elhoseiny
With the success of pretraining techniques in representation learning, a number of continual learning methods based on pretrained models have been proposed.
no code implementations • CVPR 2022 • MohammadReza Davari, Nader Asadi, Sudhir Mudur, Rahaf Aljundi, Eugene Belilovsky
Continual Learning research typically focuses on tackling the phenomenon of catastrophic forgetting in neural networks.
3 code implementations • ICLR 2022 • Lucas Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, Eugene Belilovsky
In this work, we focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream, and new classes must be distinguished from previous ones.
1 code implementation • ICCV 2021 • Farzaneh Rezaeianaran, Rakshith Shetty, Rahaf Aljundi, Daniel Olmeda Reino, Shanshan Zhang, Bernt Schiele
In order to robustly deploy object detectors across a wide range of scenarios, they should be adaptable to shifts in the input distribution without the need to constantly annotate new data.
1 code implementation • 24 Jun 2021 • Rahaf Aljundi, Daniel Olmeda Reino, Nikolay Chumerin, Richard E. Turner
This work identifies the crucial link between the two problems and investigates the Novelty Detection problem under the Continual Learning setting.
1 code implementation • ICCV 2021 • Tomas Vojir, Tomas Sipka, Rahaf Aljundi, Nikolay Chumerin, Daniel Olmeda Reino, Jiri Matas
To that end, we propose a reconstruction module that can be used with many existing semantic segmentation networks, and that is trained to recognize and reconstruct road (drivable) surface from a small bottleneck.
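The mechanism described here, reconstructing the road through a small bottleneck and flagging pixels with high reconstruction error, can be sketched as follows. The architecture, channel counts, and threshold are illustrative, not the paper's exact module.

```python
import torch
import torch.nn as nn

class RoadReconstructor(nn.Module):
    """Encoder-decoder with a narrow bottleneck, trained on road pixels only."""
    def __init__(self, in_ch=3, bottleneck=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, bottleneck, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(bottleneck, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, in_ch, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_map(model, image, threshold=0.1):
    """Per-pixel reconstruction error; high error suggests a non-road obstacle."""
    with torch.no_grad():
        err = (model(image) - image).pow(2).mean(1)   # (B, H, W)
    return err > threshold
```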
no code implementations • 14 Oct 2020 • Rahaf Aljundi, Nikolay Chumerin, Daniel Olmeda Reino
State-of-the-art machine learning models require access to significant amounts of annotated data to achieve the desired level of performance.
2 code implementations • NeurIPS 2019 • Rahaf Aljundi, Eugene Belilovsky, Tinne Tuytelaars, Laurent Charlin, Massimo Caccia, Min Lin, Lucas Page-Caccia
Methods based on replay, either generative or from a stored memory, have been shown to be effective approaches for continual learning, matching or exceeding the state of the art in a number of standard benchmarks.
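Replay methods share a simple core loop: combine each incoming batch with a batch sampled from a memory of past data before taking the gradient step. A minimal sketch follows, assuming a plain PyTorch model and loss; the unbounded list buffer and uniform sampling are simplifications, and the paper's specific retrieval strategy is not reproduced here.

```python
import random
import torch

def replay_step(model, optimizer, loss_fn, batch, memory, replay_size=32):
    x, y = batch
    if len(memory) >= replay_size:
        # Rehearse: mix a random sample of stored examples into the batch.
        mem_x, mem_y = zip(*random.sample(memory, replay_size))
        x = torch.cat([x, torch.stack(mem_x)])
        y = torch.cat([y, torch.stack(mem_y)])
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
    # Naive unbounded buffer for the sketch; real methods cap its size.
    memory.extend(zip(batch[0], batch[1]))
```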
1 code implementation • 7 Oct 2019 • Rahaf Aljundi
A key component of such a never-ending learning process is to overcome the catastrophic forgetting of previously seen data, a problem that neural networks are well known to suffer from.
1 code implementation • 18 Sep 2019 • Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Gregory Slabaugh, Tinne Tuytelaars
Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase.
5 code implementations • NeurIPS 2019 • Rahaf Aljundi, Min Lin, Baptiste Goujaud, Yoshua Bengio
To prevent forgetting, a replay buffer is usually employed to store the previous data for the purpose of rehearsal.
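A fixed-size buffer over a stream is commonly maintained with reservoir sampling, which keeps a uniform random subset of everything seen so far; the sketch below shows that baseline policy. The paper studies *which* samples to keep, so this is the uniform reference point rather than its selection strategy.

```python
import random

class ReservoirBuffer:
    """Fixed-capacity buffer holding a uniform sample of the stream so far."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0

    def add(self, sample):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            # Each stream element survives with probability capacity / n_seen.
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = sample
```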
no code implementations • 26 Dec 2018 • Mohamed Elhoseiny, Francesca Babiloni, Rahaf Aljundi, Marcus Rohrbach, Manohar Paluri, Tinne Tuytelaars
So far, life-long learning (LLL) has been studied in relatively small-scale and artificial setups.
1 code implementation • CVPR 2019 • Rahaf Aljundi, Klaas Kelchtermans, Tinne Tuytelaars
A sequence of tasks is learned, one at a time, with all data of the current task available, but none from previous or future tasks.
1 code implementation • ICLR 2019 • Rahaf Aljundi, Marcus Rohrbach, Tinne Tuytelaars
In particular, we propose a novel regularizer, that encourages representation sparsity by means of neural inhibition.
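A heavily reduced sketch of a sparsity-by-inhibition penalty: discourage neurons from firing together on the same input, so few neurons stay active per example and capacity remains free for later tasks. The actual regularizer additionally weights the penalty by neuron locality and discounts important neurons; those parts are omitted here as an assumption-laden simplification.

```python
import torch

def inhibition_penalty(activations):
    """activations: (B, H) non-negative hidden activations (e.g. post-ReLU)."""
    gram = activations.T @ activations        # (H, H) neuron co-activation strengths
    off_diag = gram.sum() - gram.diagonal().sum()
    return off_diag / activations.size(0)     # penalize neurons firing together
```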
3 code implementations • ECCV 2018 • Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, Tinne Tuytelaars
We show state-of-the-art performance and, for the first time, the ability to adapt the importance of the parameters based on unlabeled data towards what the network needs (not) to forget, which may vary depending on test conditions.
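The label-free importance estimation described here can be sketched as: accumulate, over unlabeled data, the gradient magnitude of the squared L2 norm of the network output with respect to each parameter. The per-batch averaging below is an illustrative approximation of the per-sample average.

```python
import torch

def importance_from_unlabeled(model, unlabeled_loader):
    """Estimate per-parameter importance without labels."""
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x in unlabeled_loader:
        model.zero_grad()
        out = model(x)
        out.pow(2).sum().backward()   # sensitivity of the squared output norm
        for n, p in model.named_parameters():
            if p.grad is not None:
                importance[n] += p.grad.abs()
        n_batches += 1
    return {n: w / n_batches for n, w in importance.items()}
```

When training on subsequent tasks, changes to high-importance parameters would then be penalized, e.g. by adding a term of the form lambda * sum_i importance_i * (theta_i - theta_i_old)^2 to the loss.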
no code implementations • ICCV 2017 • Amal Rannen Triki, Rahaf Aljundi, Mathew B. Blaschko, Tinne Tuytelaars
This paper introduces a new lifelong learning solution where a single model is trained for a sequence of tasks.
no code implementations • 28 Nov 2016 • Rahaf Aljundi, Punarjay Chakravarty, Tinne Tuytelaars
In this work, we aim at automatically labeling actors in a TV series.
2 code implementations • CVPR 2017 • Rahaf Aljundi, Punarjay Chakravarty, Tinne Tuytelaars
Further, the autoencoders inherently capture the relatedness of one task to another; based on this, the most relevant prior model to use when training a new expert, with fine-tuning or learning-without-forgetting, can be selected.
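The gating mechanism described here can be sketched directly: train one undercomplete autoencoder per task on that task's features, then route each test sample to the expert whose autoencoder reconstructs it with the lowest error. The same errors can rank prior tasks by relatedness when choosing which expert to adapt for a new task.

```python
import torch

def select_expert(feature, autoencoders):
    """autoencoders: dict mapping task name -> trained autoencoder module."""
    errors = {}
    with torch.no_grad():
        for task, ae in autoencoders.items():
            errors[task] = (ae(feature) - feature).pow(2).mean().item()
    return min(errors, key=errors.get)   # lowest error = most related task
```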
no code implementations • 23 Mar 2016 • Rahaf Aljundi, Tinne Tuytelaars
To this end, we first analyze the output of each convolutional layer from a domain adaptation perspective.
no code implementations • CVPR 2015 • Rahaf Aljundi, Remi Emonet, Damien Muselet, Marc Sebban
Domain adaptation (DA) has seen much success in computer vision in recent years, dealing with situations where the learning process must transfer knowledge from a source to a target domain.