Search Results for author: Sk Miraj Ahmed

Found 7 papers, 2 papers with code

FLASH: Federated Learning Across Simultaneous Heterogeneities

no code implementations • 13 Feb 2024 • Xiangyu Chang, Sk Miraj Ahmed, Srikanth V. Krishnamurthy, Basak Guler, Ananthram Swami, Samet Oymak, Amit K. Roy-Chowdhury

The key premise of federated learning (FL) is to train ML models across a diverse set of data-owners (clients), without exchanging local data.

Federated Learning • Multi-Armed Bandits
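The abstract above states the core FL premise: models are trained across many clients without exchanging local data. For context, here is a minimal sketch of a generic FedAvg-style round, not the FLASH method itself; `client_loaders`, `local_epochs`, and the SGD settings are illustrative placeholders.

```python
import copy
import torch
import torch.nn.functional as F

def federated_round(global_model, client_loaders, local_epochs=1, lr=0.01):
    """One generic FedAvg round: clients train locally; only weights are exchanged."""
    client_states = []
    for loader in client_loaders:                  # raw data never leaves the client
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_epochs):
            for x, y in loader:
                opt.zero_grad()
                F.cross_entropy(local(x), y).backward()
                opt.step()
        client_states.append(local.state_dict())
    # the server averages the client weights into the new global model
    averaged = {k: torch.stack([s[k].float() for s in client_states]).mean(dim=0)
                for k in client_states[0]}
    global_model.load_state_dict(averaged)
    return global_model
```

Per its title, FLASH targets clients that differ in several ways at once; the loop above is only the vanilla protocol such work starts from.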

Plug-and-Play Transformer Modules for Test-Time Adaptation

no code implementations • 6 Jan 2024 • Xiangyu Chang, Sk Miraj Ahmed, Srikanth V. Krishnamurthy, Basak Guler, Ananthram Swami, Samet Oymak, Amit K. Roy-Chowdhury

Parameter-efficient tuning (PET) methods such as LoRA, Adapter, and Visual Prompt Tuning (VPT) have found success in enabling adaptation to new domains by tuning small modules within a transformer model.

Test-time Adaptation • Visual Prompt Tuning
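As background for the PET modules named above, a minimal sketch of a LoRA-style linear layer: the pre-trained weight stays frozen and only a low-rank update is trained, which is what makes per-domain modules small and swappable. This is generic LoRA, not the paper's plug-and-play mechanism; the rank `r` and scaling `alpha` are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update A @ B."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)      # pre-trained weight stays frozen
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(base.in_features, r) * 0.01)
        self.B = nn.Parameter(torch.zeros(r, base.out_features))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A @ self.B) * self.scale
```

Adapter and VPT follow the same idea, using small bottleneck MLPs or learnable prompt tokens instead of low-rank weight updates.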

MeTA: Multi-source Test Time Adaptation

no code implementations • 4 Jan 2024 • Sk Miraj Ahmed, Fahim Faisal Niloy, Dripta S. Raychaudhuri, Samet Oymak, Amit K. Roy-Chowdhury

Test-time adaptation is the process of adapting, in an unsupervised manner, a pre-trained source model to each incoming batch of the test data (i.e., without requiring a substantial portion of the test data to be available, as in traditional domain adaptation) and without access to the source data.

Test-time Adaptation
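A common concrete instance of the batch-wise adaptation described above is entropy minimization on each incoming test batch, in the spirit of TENT. The sketch below is that generic procedure, not MeTA's multi-source method; in practice only a small subset of parameters (e.g., normalization layers) would be passed to the optimizer.

```python
import torch
import torch.nn.functional as F

def adapt_on_batch(model, x, optimizer):
    """Adapt to one unlabeled test batch by minimizing prediction entropy."""
    logits = model(x)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()   # predictions made before this adaptation step
```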

Effective Restoration of Source Knowledge in Continual Test Time Adaptation

no code implementations • 8 Nov 2023 • Fahim Faisal Niloy, Sk Miraj Ahmed, Dripta S. Raychaudhuri, Samet Oymak, Amit K. Roy-Chowdhury

By restoring knowledge from the source, the proposed approach effectively corrects the negative consequences arising from the gradual deterioration of model parameters caused by ongoing shifts in the domain.

Change Detection • Test-time Adaptation
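One simple way to restore source knowledge during continual test-time adaptation is to stochastically reset adapted parameters back to stored source weights, which bounds the parameter drift described above. The sketch below is a generic CoTTA-style stochastic restore, not this paper's change-detection-triggered scheme; the reset probability `p` is illustrative.

```python
import torch

@torch.no_grad()
def stochastic_restore(model, source_state, p=0.01):
    """Reset each adapted parameter element to its source value with probability p."""
    for name, param in model.named_parameters():
        src = source_state[name].to(param.device)
        mask = (torch.rand_like(param) < p).float()
        param.copy_(mask * src + (1.0 - mask) * param)
```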

SUMMIT: Source-Free Adaptation of Uni-Modal Models to Multi-Modal Targets

1 code implementation • ICCV 2023 • Cody Simons, Dripta S. Raychaudhuri, Sk Miraj Ahmed, Suya You, Konstantinos Karydis, Amit K. Roy-Chowdhury

In this work, we relax both of these assumptions by addressing the problem of adapting a set of models trained independently on uni-modal data to a target domain consisting of unlabeled multi-modal data, without having access to the original source dataset.

Autonomous Navigation • Pseudo Label • +2
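A natural baseline for the setting described above is to fuse the uni-modal models' predictions on the unlabeled multi-modal target into confident pseudo-labels and self-train each model on them. The sketch below shows only that generic fusion step, not SUMMIT's actual scheme; the RGB/depth model names and `threshold` are hypothetical.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def fused_pseudo_labels(model_rgb, model_depth, x_rgb, x_depth, threshold=0.9):
    """Average per-modality predictions and keep only confident pseudo-labels."""
    probs = (F.softmax(model_rgb(x_rgb), dim=1) +
             F.softmax(model_depth(x_depth), dim=1)) / 2
    conf, labels = probs.max(dim=1)
    keep = conf > threshold          # mask of confidently pseudo-labeled samples
    return labels, keep
```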

Cross-Modal Knowledge Transfer Without Task-Relevant Source Data

no code implementations • 8 Sep 2022 • Sk Miraj Ahmed, Suhas Lohit, Kuan-Chuan Peng, Michael J. Jones, Amit K. Roy-Chowdhury

In such cases, transferring knowledge from a neural network trained on a well-labeled large dataset in the source modality (RGB) to a neural network that works on a target modality (depth, infrared, etc.) …

Autonomous Navigation • Transfer Learning
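The excerpt above describes transferring knowledge from an RGB-trained network to a network that operates on another modality. A generic way to do this with paired RGB/depth inputs is cross-modal distillation, sketched below; this is an illustrative baseline under the assumption of paired inputs, not the paper's method.

```python
import torch
import torch.nn.functional as F

def distill_step(source_net, target_net, x_rgb, x_depth, optimizer):
    """Push the target-modality student's predictions toward the frozen RGB teacher's."""
    with torch.no_grad():
        teacher = F.softmax(source_net(x_rgb), dim=1)
    student = F.log_softmax(target_net(x_depth), dim=1)
    loss = F.kl_div(student, teacher, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```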

Unsupervised Multi-source Domain Adaptation Without Access to Source Data

1 code implementation • CVPR 2021 • Sk Miraj Ahmed, Dripta S. Raychaudhuri, Sujoy Paul, Samet Oymak, Amit K. Roy-Chowdhury

A recent line of work addressed this problem and proposed an algorithm that transfers knowledge to the unlabeled target domain from a single source model without requiring access to the source data.

Unsupervised Domain Adaptation
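For the multi-source, source-free setting above, a simple illustration is to combine the frozen source models' predictions on target data through learnable weights, which can then drive pseudo-label-based adaptation. This sketch shows only that generic combination step, not the paper's algorithm.

```python
import torch
import torch.nn.functional as F

class WeightedEnsemble(torch.nn.Module):
    """Combine frozen source models with learnable, softmax-normalized weights."""
    def __init__(self, source_models):
        super().__init__()
        self.models = torch.nn.ModuleList(source_models)
        self.weight_logits = torch.nn.Parameter(torch.zeros(len(source_models)))

    def forward(self, x):
        w = F.softmax(self.weight_logits, dim=0)
        probs = torch.stack([F.softmax(m(x), dim=1) for m in self.models])
        return (w[:, None, None] * probs).sum(dim=0)   # weighted average prediction
```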
