Search Results for author: Mahdi Alehdaghi

Found 7 papers, 3 papers with code

Beyond Patches: Mining Interpretable Part-Prototypes for Explainable AI

no code implementations • 16 Apr 2025 • Mahdi Alehdaghi, Rajarshi Bhattacharya, Pourya Shamsolmoali, Rafael M. O. Cruz, Maguelonne Heritier, Eric Granger

Deep learning has provided considerable advancements for multimedia systems, yet the interpretability of deep models remains a challenge.

Unsupervised Part Discovery
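
As a rough illustration of the part-prototype idea named in the title, the sketch below computes per-prototype cosine-similarity heatmaps over a CNN feature map. The shapes, the random prototype tensor, and the function name `prototype_heatmaps` are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' code): computing part-prototype
# similarity heatmaps over a CNN feature map. Shapes and the random
# "prototypes" tensor are illustrative assumptions.
import torch
import torch.nn.functional as F

def prototype_heatmaps(feats, prototypes):
    """feats: (B, C, H, W) backbone feature map.
    prototypes: (P, C) learned part prototypes.
    Returns (B, P, H, W) cosine-similarity maps, one per prototype."""
    feats = F.normalize(feats, dim=1)          # unit-norm channel vector at each location
    protos = F.normalize(prototypes, dim=1)    # unit-norm prototype vectors
    # 1x1 convolution with prototypes as filters == per-location dot product
    return F.conv2d(feats, protos[:, :, None, None])

if __name__ == "__main__":
    feats = torch.randn(2, 512, 16, 8)     # e.g. conv features of a person crop
    prototypes = torch.randn(6, 512)       # 6 hypothetical part prototypes
    maps = prototype_heatmaps(feats, prototypes)
    print(maps.shape)                      # torch.Size([2, 6, 16, 8])
```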

From Cross-Modal to Mixed-Modal Visible-Infrared Re-Identification

no code implementations • 23 Jan 2025 • Mahdi Alehdaghi, Rajarshi Bhattacharya, Pourya Shamsolmoali, Rafael M. O. Cruz, Eric Granger

While current VI-ReID methods focus on cross-modality matching, real-world applications often involve mixed galleries containing both V and I images, where state-of-the-art methods show significant performance limitations due to large domain shifts and low discrimination across mixed modalities.

Person Re-Identification

Bidirectional Multi-Step Domain Generalization for Visible-Infrared Person Re-Identification

no code implementations • 16 Mar 2024 • Mahdi Alehdaghi, Pourya Shamsolmoali, Rafael M. O. Cruz, Eric Granger

In particular, our method minimizes the cross-modal gap by identifying and aligning shared prototypes that capture key discriminative features across modalities, and then uses multiple bridging steps based on this information to enhance the feature representation.

Domain Generalization • Person Re-Identification
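
The sketch below is one possible reading of the prototype-alignment idea described above: per-identity prototypes are computed separately from visible (V) and infrared (I) embeddings and pulled toward a few interpolated bridging prototypes. The names, shapes, and the specific loss are assumptions for illustration, not the paper's method.

```python
# Minimal sketch (an assumption, not the paper's method): aligning per-identity
# prototypes computed from visible (V) and infrared (I) embeddings, using a few
# interpolated "bridging" prototypes between them.
import torch
import torch.nn.functional as F

def modality_prototypes(emb, labels, num_ids):
    """Mean embedding per identity: emb (N, D), labels (N,) in [0, num_ids)."""
    protos = torch.zeros(num_ids, emb.size(1), device=emb.device)
    counts = torch.zeros(num_ids, device=emb.device)
    protos.index_add_(0, labels, emb)
    counts.index_add_(0, labels, torch.ones_like(labels, dtype=torch.float))
    return protos / counts.clamp(min=1).unsqueeze(1)

def bridging_alignment_loss(emb_v, emb_i, labels_v, labels_i, num_ids, steps=3):
    p_v = F.normalize(modality_prototypes(emb_v, labels_v, num_ids), dim=1)
    p_i = F.normalize(modality_prototypes(emb_i, labels_i, num_ids), dim=1)
    loss = 0.0
    for t in torch.linspace(0, 1, steps + 2)[1:-1]:   # intermediate mixing ratios
        bridge = F.normalize((1 - t) * p_v + t * p_i, dim=1)
        # pull both modality prototypes toward each shared bridging prototype
        loss = loss + (1 - (bridge * p_v).sum(1)).mean() \
                    + (1 - (bridge * p_i).sum(1)).mean()
    return loss / steps

if __name__ == "__main__":
    emb_v, emb_i = torch.randn(32, 256), torch.randn(32, 256)
    labels = torch.randint(0, 8, (32,))
    print(bridging_alignment_loss(emb_v, emb_i, labels, labels, num_ids=8).item())
```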

Adaptive Generation of Privileged Intermediate Information for Visible-Infrared Person Re-Identification

no code implementations • 6 Jul 2023 • Mahdi Alehdaghi, Arthur Josi, Pourya Shamsolmoali, Rafael M. O. Cruz, Eric Granger

In this paper, the Adaptive Generation of Privileged Intermediate Information training approach is introduced to adapt and generate a virtual domain that bridges discriminant information between the V and I modalities.

Person Re-Identification
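
To make the idea of a virtual intermediate domain concrete, the following sketch blends each visible image with its own grayscale version to obtain an IR-like intermediate appearance. The paper generates this domain adaptively; the fixed luminance blend and the `alpha` parameter here are purely illustrative assumptions.

```python
# Minimal sketch, not the paper's adaptive generator: a fixed way to build
# "intermediate" images between RGB and a colorless, IR-like appearance
# by blending each visible image with its own grayscale version.
import torch

def intermediate_images(rgb, alpha=0.5):
    """rgb: (B, 3, H, W) in [0, 1]; alpha in [0, 1] controls how far the
    image is pushed toward the grayscale, IR-like appearance."""
    weights = torch.tensor([0.299, 0.587, 0.114], device=rgb.device)
    gray = (rgb * weights.view(1, 3, 1, 1)).sum(dim=1, keepdim=True)   # (B, 1, H, W)
    return (1 - alpha) * rgb + alpha * gray.expand_as(rgb)

if __name__ == "__main__":
    batch = torch.rand(4, 3, 256, 128)              # hypothetical person crops
    z = intermediate_images(batch, alpha=0.7)
    print(z.shape)                                  # torch.Size([4, 3, 256, 128])
```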

Fusion for Visual-Infrared Person ReID in Real-World Surveillance Using Corrupted Multimodal Data

1 code implementation • 29 Apr 2023 • Arthur Josi, Mahdi Alehdaghi, Rafael M. O. Cruz, Eric Granger

For realistic evaluation of multimodal (and cross-modal) V-I person ReID models, we propose new challenging corrupted datasets for scenarios where V and I cameras are co-located (CL) and not co-located (NCL).

Data Augmentation • Person Re-Identification
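
The sketch below shows one plausible way such corrupted test data can be simulated: a simple Gaussian-noise corruption applied either jointly (co-located cameras) or independently (not co-located) to a visible-infrared image pair. The released benchmark covers a much broader corruption suite; the function names and the CL/NCL handling here are assumptions.

```python
# Minimal sketch (illustrative only; the actual benchmark uses many more
# corruption types and severities): corrupting a visible-infrared test pair.
import torch

def corrupt(img, severity=3, kind="gaussian_noise"):
    """img: (C, H, W) tensor in [0, 1]; severity in 1..5."""
    if kind == "gaussian_noise":
        sigma = 0.04 * severity
        return (img + sigma * torch.randn_like(img)).clamp(0, 1)
    raise ValueError(f"unknown corruption: {kind}")

def corrupt_pair(v_img, i_img, co_located=True):
    # One plausible interpretation: co-located (CL) cameras share conditions,
    # so both modalities get the same severity; not co-located (NCL) cameras
    # are corrupted independently.
    if co_located:
        sev = int(torch.randint(1, 6, (1,)))
        return corrupt(v_img, sev), corrupt(i_img, sev)
    return corrupt(v_img, int(torch.randint(1, 6, (1,)))), \
           corrupt(i_img, int(torch.randint(1, 6, (1,))))

if __name__ == "__main__":
    v, i = torch.rand(3, 256, 128), torch.rand(3, 256, 128)
    cv, ci = corrupt_pair(v, i, co_located=False)
    print(cv.shape, ci.shape)
```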

Multimodal Data Augmentation for Visual-Infrared Person ReID with Corrupted Data

1 code implementation • 22 Nov 2022 • Arthur Josi, Mahdi Alehdaghi, Rafael M. O. Cruz, Eric Granger

Several deep learning models have been proposed for visible-infrared (V-I) person ReID to recognize individuals from images captured using RGB and IR cameras.

Data Augmentation

Visible-Infrared Person Re-Identification Using Privileged Intermediate Information

1 code implementation • 19 Sep 2022 • Mahdi Alehdaghi, Arthur Josi, Rafael M. O. Cruz, Eric Granger

This paper introduces a novel approach for creating an intermediate virtual domain that acts as a bridge between the two main domains (i.e., RGB and IR modalities) during training.

Domain Adaptation • Person Re-Identification • +1
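
As a loose illustration of how an intermediate domain can serve as privileged, training-time-only information, the sketch below adds an auxiliary loss that pulls visible and infrared embeddings toward the intermediate-domain embedding; the intermediate branch is dropped at inference. The toy encoder and loss are assumptions, not the released implementation.

```python
# Minimal sketch (an assumption, not the released code): the intermediate
# domain is used only as privileged training-time information via an
# auxiliary alignment loss, and is never needed at test time.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReIDEncoder(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(32, dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

encoder = ReIDEncoder()                        # shared V/I encoder (toy backbone)

def train_step(v, i, z):
    """v, i: visible / infrared batches; z: intermediate-domain batch
    (e.g. produced by the blending sketch above). z is unused at test time."""
    f_v, f_i, f_z = encoder(v), encoder(i), encoder(z)
    # pull both modalities toward the privileged intermediate representation
    aux = (1 - (f_v * f_z).sum(1)).mean() + (1 - (f_i * f_z).sum(1)).mean()
    return aux                                 # added to the usual ReID losses

if __name__ == "__main__":
    v, i, z = (torch.rand(4, 3, 256, 128) for _ in range(3))
    print(train_step(v, i, z).item())
```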
