Search Results for author: Rahaf Aljundi

Found 34 papers, 18 papers with code

The Phantom Menace: Unmasking Privacy Leakages in Vision-Language Models

no code implementations • 2 Aug 2024 • Simone Caldarella, Massimiliano Mancini, Elisa Ricci, Rahaf Aljundi

Vision-Language Models (VLMs) combine visual and textual understanding, rendering them well-suited for diverse tasks like generating image captions and answering visual questions across various domains.

Image Captioning

Imperfect Vision Encoders: Efficient and Robust Tuning for Vision-Language Models

no code implementations • 23 Jul 2024 • Aristeidis Panos, Rahaf Aljundi, Daniel Olmeda Reino, Richard E Turner

Vision language models (VLMs) demonstrate impressive capabilities in visual question answering and image captioning, acting as a crucial link between visual and language models.

Computational Efficiency Image Captioning +2

Controlling Forgetting with Test-Time Data in Continual Learning

no code implementations • 19 Jun 2024 • Vaibhav Singh, Rahaf Aljundi, Eugene Belilovsky

Foundational vision-language models have shown impressive performance on various downstream tasks.

Continual Learning

Annotation Free Semantic Segmentation with Vision Foundation Models

no code implementations • 14 Mar 2024 • Soroush Seifi, Daniel Olmeda Reino, Fabien Despinoy, Rahaf Aljundi

Semantic Segmentation is one of the most challenging vision tasks, usually requiring large amounts of training data with expensive pixel-level annotations.

Segmentation Semantic Segmentation +1

Incremental Object-Based Novelty Detection with Feedback Loop

no code implementations • 15 Nov 2023 • Simone Caldarella, Elisa Ricci, Rahaf Aljundi

Object-based Novelty Detection (ND) aims to identify unknown objects that do not belong to classes seen during training by an object detection model.

Novelty Detection Object +2

OOD Aware Supervised Contrastive Learning

no code implementations • 3 Oct 2023 • Soroush Seifi, Daniel Olmeda Reino, Nikolay Chumerin, Rahaf Aljundi

Our solution is simple and efficient, and acts as a natural extension of closed-set supervised contrastive representation learning.

Contrastive Learning Out of Distribution (OOD) Detection +1
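
To make the snippet above concrete: a minimal sketch of extending a supervised contrastive loss so that OOD samples act only as negatives. This is an illustration of the general idea, not necessarily the paper's exact loss; the function name, temperature, and masking scheme are assumptions.

```python
import torch
import torch.nn.functional as F

def ood_aware_supcon_loss(feats, labels, ood_mask, tau=0.1):
    """Supervised contrastive loss where OOD samples act only as
    negatives: they enlarge every ID anchor's denominator but never
    form positive pairs. feats: (N, D); ood_mask: bool (N,)."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / tau                      # pairwise similarity
    eye = torch.eye(len(feats), dtype=torch.bool, device=feats.device)

    # positives: same label, both in-distribution, excluding the anchor
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    both_id = ~ood_mask.unsqueeze(0) & ~ood_mask.unsqueeze(1)
    pos = same & both_id & ~eye

    # log-softmax over all other samples, ID and OOD alike
    log_prob = sim.masked_fill(eye, float('-inf'))
    log_prob = log_prob - torch.logsumexp(log_prob, dim=1, keepdim=True)

    # mean positive log-likelihood, only for ID anchors with a positive
    pos_count = pos.sum(1)
    anchor = ~ood_mask & (pos_count > 0)
    pos_lp = log_prob.masked_fill(~pos, 0.0).sum(1)
    return -(pos_lp[anchor] / pos_count[anchor]).mean()
```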

Overcoming Generic Knowledge Loss with Selective Parameter Update

1 code implementation • CVPR 2024 • Wenxuan Zhang, Paul Janson, Rahaf Aljundi, Mohamed Elhoseiny

Our method improves accuracy on the newly learned tasks by up to 7% while preserving the pretraining knowledge, with a negligible 0.9% decrease in accuracy on a representative control set.

Continual Learning General Knowledge

Calibrated Out-of-Distribution Detection with a Generic Representation

1 code implementation • 23 Mar 2023 • Tomas Vojir, Jan Sochman, Rahaf Aljundi, Jiri Matas

We propose a novel OOD method, called GROOD, that formulates OOD detection as a Neyman-Pearson task with well-calibrated scores and achieves excellent performance, predicated on the use of a good generic representation.

Out-of-Distribution Detection
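
As a generic illustration of what "formulating OOD detection as a Neyman-Pearson task" means: the textbook likelihood-ratio test on detection scores. This is not GROOD's actual construction; the Gaussian score model and function names are assumptions.

```python
import numpy as np
from scipy.stats import norm

def fit_np_detector(id_scores, ood_scores):
    """Fit simple Gaussians to detection scores from held-out ID and OOD
    data; the log-likelihood ratio is then the Neyman-Pearson optimal
    statistic for these two (assumed Gaussian) score distributions."""
    p_id = norm(id_scores.mean(), id_scores.std())
    p_ood = norm(ood_scores.mean(), ood_scores.std())

    def log_lr(s):  # larger value => more likely in-distribution
        return p_id.logpdf(s) - p_ood.logpdf(s)
    return log_lr

# Usage: threshold log_lr on a validation split to hit a target
# false-positive rate, e.g. accept as ID when log_lr(score) > t.
```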

First Session Adaptation: A Strong Replay-Free Baseline for Class-Incremental Learning

no code implementations • ICCV 2023 • Aristeidis Panos, Yuriko Kobe, Daniel Olmeda Reino, Rahaf Aljundi, Richard E. Turner

In this work, we develop a baseline method, First Session Adaptation (FSA), that sheds light on the efficacy of existing CIL approaches and allows us to assess the relative performance contributions from head and body adaptation.

class-incremental learning Class Incremental Learning +2

Contrastive Classification and Representation Learning with Probabilistic Interpretation

no code implementations • 7 Nov 2022 • Rahaf Aljundi, Yash Patel, Milan Sulc, Daniel Olmeda, Nikolay Chumerin

In this work, we investigate the possibility of learning both the representation and the classifier using one objective function that combines the robustness of contrastive learning with the probabilistic interpretation of the cross-entropy loss.

Classification Contrastive Learning +1

A Simple Baseline that Questions the Use of Pretrained-Models in Continual Learning

1 code implementation • 10 Oct 2022 • Paul Janson, Wenxuan Zhang, Rahaf Aljundi, Mohamed Elhoseiny

With the success of pretraining techniques in representation learning, a number of continual learning methods based on pretrained models have been proposed.

Continual Learning Representation Learning

New Insights on Reducing Abrupt Representation Change in Online Continual Learning

3 code implementations • ICLR 2022 • Lucas Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, Eugene Belilovsky

In this work, we focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream, and new classes must be distinguished from previous ones.

class-incremental learning Class Incremental Learning
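
The remedy this paper proposes, ER-ACE (experience replay with an asymmetric cross-entropy), lets incoming samples compete only among the classes of the incoming batch while replayed samples use all seen classes. A minimal sketch under that reading; tensor shapes and helper names are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def er_ace_loss(logits_in, y_in, logits_buf, y_buf, seen_classes):
    """Asymmetric cross-entropy: incoming samples compete only among the
    classes of the incoming batch, so new classes cannot abruptly push
    old-class representations around; replayed samples use all classes
    seen so far."""
    mask_in = torch.full_like(logits_in, float('-inf'))
    mask_in[:, y_in.unique()] = 0.0                     # current classes only
    loss_in = F.cross_entropy(logits_in + mask_in, y_in)

    mask_buf = torch.full_like(logits_buf, float('-inf'))
    mask_buf[:, seen_classes] = 0.0                     # every class seen so far
    loss_buf = F.cross_entropy(logits_buf + mask_buf, y_buf)
    return loss_in + loss_buf
```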

Continual Novelty Detection

1 code implementation • 24 Jun 2021 • Rahaf Aljundi, Daniel Olmeda Reino, Nikolay Chumerin, Richard E. Turner

This work identifies the crucial link between the two problems and investigates the Novelty Detection problem under the Continual Learning setting.

Continual Learning Novelty Detection

New Insights on Reducing Abrupt Representation Change in Online Continual Learning

3 code implementations • 11 Apr 2021 • Lucas Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, Eugene Belilovsky

In this work, we focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream, and new classes must be distinguished from previous ones.

Continual Learning Metric Learning

Road Anomaly Detection by Partial Image Reconstruction With Segmentation Coupling

1 code implementation • ICCV 2021 • Tomas Vojir, Tomas Sipka, Rahaf Aljundi, Nikolay Chumerin, Daniel Olmeda Reino, Jiri Matas

To that end, we propose a reconstruction module that can be used with many existing semantic segmentation networks, and that is trained to recognize and reconstruct road (drivable) surface from a small bottleneck.

Anomaly Detection Autonomous Driving +3
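
A rough sketch of the reconstruction idea described in the snippet above, assuming features already upsampled to image resolution; layer sizes and names are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RoadReconstructor(nn.Module):
    """Bottleneck head trained to reconstruct only road-labeled pixels;
    objects the module has never seen on the road reconstruct poorly,
    so per-pixel error doubles as an anomaly score."""
    def __init__(self, in_ch=256, bottleneck=8):
        super().__init__()
        self.encode = nn.Conv2d(in_ch, bottleneck, 1)   # small bottleneck
        self.decode = nn.Conv2d(bottleneck, 3, 1)

    def forward(self, feats, image, road_mask):
        # feats assumed already upsampled to the image resolution
        recon = self.decode(torch.relu(self.encode(feats)))
        err = ((recon - image) ** 2).mean(dim=1)        # (B, H, W)
        # train on err over road pixels only; at test time, high err
        # inside the predicted road region flags an anomaly
        return err * road_mask
```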

Identifying Wrongly Predicted Samples: A Method for Active Learning

no code implementations • 14 Oct 2020 • Rahaf Aljundi, Nikolay Chumerin, Daniel Olmeda Reino

State-of-the-art machine learning models require access to a significant amount of annotated data in order to achieve the desired level of performance.

Active Learning

Online Continual Learning with Maximal Interfered Retrieval

2 code implementations • NeurIPS 2019 • Rahaf Aljundi, Eugene Belilovsky, Tinne Tuytelaars, Laurent Charlin, Massimo Caccia, Min Lin, Lucas Page-Caccia

Methods based on replay, either generative or from a stored memory, have been shown to be effective approaches for continual learning, matching or exceeding the state of the art in a number of standard benchmarks.

class-incremental learning Class Incremental Learning +1
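
The retrieval criterion that gives this paper its title can be sketched as follows: simulate the parameter update the incoming batch would cause, then replay the memory samples whose loss would increase the most. A sketch only; the virtual step here is plain SGD and batching details are simplified.

```python
import copy
import torch
import torch.nn.functional as F

def retrieve_mir(model, lr, x_in, y_in, mem_x, mem_y, k):
    """Maximally interfered retrieval: simulate the SGD step the incoming
    batch would cause on a throwaway copy of the model, then return the
    k memory samples whose loss would increase the most."""
    with torch.no_grad():
        loss_before = F.cross_entropy(model(mem_x), mem_y, reduction='none')

    virtual = copy.deepcopy(model)                      # virtual update only
    step_loss = F.cross_entropy(virtual(x_in), y_in)
    grads = torch.autograd.grad(step_loss, list(virtual.parameters()))
    with torch.no_grad():
        for p, g in zip(virtual.parameters(), grads):
            p -= lr * g
        loss_after = F.cross_entropy(virtual(mem_x), mem_y, reduction='none')

    idx = (loss_after - loss_before).topk(k).indices    # most interfered
    return mem_x[idx], mem_y[idx]
```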

Continual Learning in Neural Networks

1 code implementation • 7 Oct 2019 • Rahaf Aljundi

A key component of such a never-ending learning process is to overcome the catastrophic forgetting of previously seen data, a problem that neural networks are well known to suffer from.

Continual Learning Object Recognition

A continual learning survey: Defying forgetting in classification tasks

1 code implementation • 18 Sep 2019 • Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Gregory Slabaugh, Tinne Tuytelaars

Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase.

Classification Continual Learning +3

Online Continual Learning with Maximally Interfered Retrieval

1 code implementation • 11 Aug 2019 • Rahaf Aljundi, Lucas Caccia, Eugene Belilovsky, Massimo Caccia, Min Lin, Laurent Charlin, Tinne Tuytelaars

Methods based on replay, either generative or from a stored memory, have been shown to be effective approaches for continual learning, matching or exceeding the state of the art in a number of standard benchmarks.

Continual Learning Retrieval

Exploring the Challenges towards Lifelong Fact Learning

no code implementations • 26 Dec 2018 • Mohamed Elhoseiny, Francesca Babiloni, Rahaf Aljundi, Marcus Rohrbach, Manohar Paluri, Tinne Tuytelaars

So far, life-long learning (LLL) has been studied in relatively small-scale and artificial setups.

Task-Free Continual Learning

1 code implementation • CVPR 2019 • Rahaf Aljundi, Klaas Kelchtermans, Tinne Tuytelaars

A sequence of tasks is learned, one at a time, with all data of the current task available but none from previous or future tasks.

Continual Learning Face Recognition +1

Selfless Sequential Learning

1 code implementation • ICLR 2019 • Rahaf Aljundi, Marcus Rohrbach, Tinne Tuytelaars

In particular, we propose a novel regularizer that encourages representation sparsity by means of neural inhibition; a sketch of the idea follows below.
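
A minimal sketch of such a locality-weighted inhibition penalty. The Gaussian neighbourhood weighting follows the spirit of the paper's SLNID regularizer, but the exact form, including its importance-based discounting, is simplified here.

```python
import torch

def local_inhibition_penalty(h, sigma=2.0):
    """Locality-weighted inhibition: co-activation of nearby neurons is
    penalized, pushing each example to excite a small, spatially tight
    set of neurons and leaving capacity free for later tasks.
    h: (batch, n_neurons) post-ReLU activations."""
    n = h.size(1)
    idx = torch.arange(n, dtype=h.dtype, device=h.device)
    # Gaussian neighbourhood: only nearby neurons inhibit each other
    w = torch.exp(-(idx[:, None] - idx[None, :]) ** 2 / (2 * sigma ** 2))
    w.fill_diagonal_(0.0)                               # no self-inhibition
    coact = h.t() @ h / h.size(0)                       # mean co-activation
    return (w * coact).sum()
```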

Memory Aware Synapses: Learning what (not) to forget

3 code implementations • ECCV 2018 • Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, Tinne Tuytelaars

We show state-of-the-art performance and, for the first time, the ability to adapt the importance of the parameters based on unlabeled data towards what the network needs (not) to forget, which may vary depending on test conditions.

Object Recognition
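
The unlabeled-data importance estimate is the heart of Memory Aware Synapses: parameter importance is the averaged gradient magnitude of the squared L2 norm of the network output, and a quadratic penalty then anchors important parameters. A sketch using a batch-level approximation; the loader and penalty weight are assumptions.

```python
import torch

def mas_importance(model, unlabeled_loader):
    """Parameter importance as the average gradient magnitude of the
    squared L2 norm of the output -- computed on unlabeled data, so no
    labels or task loss are needed (batch-level approximation)."""
    omega = [torch.zeros_like(p) for p in model.parameters()]
    n_batches = 0
    for x in unlabeled_loader:
        model.zero_grad()
        model(x).pow(2).sum().backward()     # d ||f(x)||^2 / d theta
        for o, p in zip(omega, model.parameters()):
            o += p.grad.abs()
        n_batches += 1
    return [o / n_batches for o in omega]

def mas_penalty(model, theta_star, omega, lam):
    """Quadratic penalty anchoring parameters that were important for
    earlier tasks near their previous values theta_star."""
    return lam * sum((om * (p - ps).pow(2)).sum()
                     for p, ps, om in zip(model.parameters(), theta_star, omega))
```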

Encoder Based Lifelong Learning

no code implementations • ICCV 2017 • Amal Rannen Triki, Rahaf Aljundi, Mathew B. Blaschko, Tinne Tuytelaars

This paper introduces a new lifelong learning solution where a single model is trained for a sequence of tasks.

Image Classification

Expert Gate: Lifelong Learning with a Network of Experts

2 code implementations • CVPR 2017 • Rahaf Aljundi, Punarjay Chakravarty, Tinne Tuytelaars

Further, the autoencoders inherently capture the relatedness of one task to another, based on which the most relevant prior model can be selected for training a new expert, with finetuning or learning-without-forgetting.

Image Classification Video Prediction
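
The gating mechanism described in the snippet can be sketched with one small undercomplete autoencoder per expert; at test time the input is routed to the expert whose autoencoder reconstructs it best. Layer sizes and the feature dimension are assumptions.

```python
import torch
import torch.nn as nn

class GateAutoencoder(nn.Module):
    """One small undercomplete autoencoder per expert, trained on that
    task's features; a low reconstruction error signals 'this input
    looks like my task'."""
    def __init__(self, dim=4096, code=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, code), nn.ReLU(),
                                 nn.Linear(code, dim))

    def error(self, x):
        return ((self.net(x) - x) ** 2).mean(dim=1)     # per-sample error

def route(feats, gates):
    """Send each input to the expert whose autoencoder reconstructs it
    best; the same errors can rank task relatedness when a new expert
    needs the most relevant prior model."""
    errs = torch.stack([g.error(feats) for g in gates], dim=1)
    return errs.argmin(dim=1)                           # expert index per sample
```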

Lightweight Unsupervised Domain Adaptation by Convolutional Filter Reconstruction

no code implementations • 23 Mar 2016 • Rahaf Aljundi, Tinne Tuytelaars

To this end, we first analyze the output of each convolutional layer from a domain adaptation perspective.

Unsupervised Domain Adaptation

Landmarks-Based Kernelized Subspace Alignment for Unsupervised Domain Adaptation

no code implementations • CVPR 2015 • Rahaf Aljundi, Remi Emonet, Damien Muselet, Marc Sebban

Domain adaptation (DA) has gained a lot of success in recent years in computer vision, dealing with situations where the learning process has to transfer knowledge from a source to a target domain.

Unsupervised Domain Adaptation
