Search Results for author: Mostofa Rafid Uddin

Found 5 papers, 0 papers with code

DiffCAM: Data-Driven Saliency Maps by Capturing Feature Differences

no code implementations CVPR 2025 Xingjian Li, Qiming Zhao, Neelesh Bisht, Mostofa Rafid Uddin, Jin Yu Kim, Bryan Zhang, Min Xu

In recent years, the interpretability of Deep Neural Networks (DNNs) has garnered significant attention, particularly due to their widespread deployment in critical domains like healthcare, finance, and autonomous systems.

Feature Importance

Improving Knowledge Distillation in Transfer Learning with Layer-wise Learning Rates

no code implementations 5 Jul 2024 Shirley Kokane, Mostofa Rafid Uddin, Min Xu

Contrary to these methods, in this work, we propose a novel layer-wise learning scheme that adjusts learning parameters per layer as a function of the differences in the Jacobian/Attention/Hessian of the output activations w.r.t.

Knowledge Distillation, Transfer Learning
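
The layer-wise idea described in this entry can be illustrated with a small sketch. The following is a hypothetical example under stated assumptions, not the authors' implementation: per-layer learning rates are scaled by a discrepancy between teacher and student activations, using a Gram-matrix ("attention-like") gap as a stand-in for the Jacobian/Attention/Hessian measures named in the abstract. The names `gram` and `layerwise_lrs` and the specific scaling rule are assumptions.

```python
# Minimal sketch (not the authors' code): per-layer learning rates for
# knowledge distillation in transfer learning, scaled by how much the
# student's activations deviate from the teacher's at each layer.
import torch
import torch.nn as nn

def gram(feat):
    # feat: (batch, dim) activations -> (dim, dim) Gram matrix,
    # a crude "attention-like" summary of the layer's output.
    return feat.T @ feat / feat.shape[0]

def layerwise_lrs(student_feats, teacher_feats, base_lr=1e-3, eps=1e-8):
    """Return one learning rate per layer, larger where the student
    deviates more from the teacher (hypothetical scaling rule)."""
    gaps = [
        torch.norm(gram(s) - gram(t)).item()
        for s, t in zip(student_feats, teacher_feats)
    ]
    total = sum(gaps) + eps
    return [base_lr * len(gaps) * g / total for g in gaps]

# Toy usage: a 3-layer student, one optimizer parameter group per layer.
student = nn.ModuleList([nn.Linear(16, 16) for _ in range(3)])
x = torch.randn(8, 16)

s_feats, t_feats = [], []
h = x
for layer in student:
    h = torch.relu(layer(h))
    s_feats.append(h)
    t_feats.append(h + 0.1 * torch.randn_like(h))  # stand-in teacher activations

lrs = layerwise_lrs(s_feats, t_feats)
optimizer = torch.optim.SGD(
    [{"params": layer.parameters(), "lr": lr}
     for layer, lr in zip(student, lrs)],
    lr=1e-3,  # fallback; overridden by the per-layer values above
)
```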

DualContrast: Unsupervised Disentangling of Content and Transformations with Implicit Parameterization

no code implementations 27 May 2024 Mostofa Rafid Uddin, Min Xu

We demonstrate that existing self-supervised methods with data augmentation result in poor disentanglement of content and transformations in real-world scenarios.

Data Augmentation, Disentanglement

Harmony: A Generic Unsupervised Approach for Disentangling Semantic Content From Parameterized Transformations

no code implementations CVPR 2022 Mostofa Rafid Uddin, Gregory Howe, Xiangrui Zeng, Min Xu

Harmony leverages a simple cross-contrastive learning framework with multiple explicitly parameterized latent representations to disentangle content from transformations.

Contrastive Learning, Disentanglement
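
The cross-contrastive framing in this entry can be sketched in a few lines. The example below is an assumption-laden illustration, not the Harmony implementation: an encoder splits its output into a content code and an explicitly parameterized transformation code (here a single rotation-like scalar), content codes of two views of the same sample are aligned with an InfoNCE-style contrastive loss, and the transformation head regresses the known transformation parameter. The names `DisentanglingEncoder` and `info_nce` and the single-scalar transformation are hypothetical choices.

```python
# Minimal sketch (assumptions, not the paper's code): disentangling a
# content code from an explicitly parameterized transformation code
# with a contrastive loss across two transformed views.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentanglingEncoder(nn.Module):
    def __init__(self, in_dim=784, content_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.content_head = nn.Linear(128, content_dim)   # semantic content
        self.transform_head = nn.Linear(128, 1)           # e.g. rotation angle

    def forward(self, x):
        h = self.backbone(x)
        return self.content_head(h), self.transform_head(h)

def info_nce(z1, z2, temperature=0.1):
    # Contrastive loss: matching content codes across views are positives.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature
    targets = torch.arange(z1.shape[0])
    return F.cross_entropy(logits, targets)

# Toy usage with random "views" and random transformation parameters.
enc = DisentanglingEncoder()
view_a, view_b = torch.randn(16, 784), torch.randn(16, 784)
angle_a, angle_b = torch.rand(16, 1), torch.rand(16, 1)

c_a, t_a = enc(view_a)
c_b, t_b = enc(view_b)
loss = (info_nce(c_a, c_b)          # content: invariant across views
        + F.mse_loss(t_a, angle_a)  # transformation: explicit parameter
        + F.mse_loss(t_b, angle_b))
loss.backward()
```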
