1 code implementation • 2 Jan 2024 • Serban Stan, Mohammad Rostami
Semantic segmentation models trained on annotated data fail to generalize well when the input data distribution changes over extended time periods, requiring re-training to maintain performance.
no code implementations • 29 Jan 2023 • Serban Stan, Mohammad Rostami
Our algorithm is based on updating the model such that the internal representation of data remains unbiased despite distributional shifts in the input space.
no code implementations • 2 Nov 2022 • Serban Stan, Mohammad Rostami
We rely on an approximation of the source latent features at adaptation time, and create a joint source/target embedding space by minimizing a distributional distance metric based on optimal transport.
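The abstract does not name the specific optimal-transport metric; a common, tractable choice in this line of work is the sliced Wasserstein distance, which compares two embedding clouds via sorted one-dimensional random projections. The sketch below is illustrative only: the function name and the idea of comparing a sampled approximation of the source latent features against target embeddings are assumptions, not the authors' exact method.

```python
import numpy as np

def sliced_wasserstein(source_feats, target_feats, n_projections=50, seed=None):
    """Monte-Carlo estimate of the sliced Wasserstein-2 distance between two
    sets of latent features (same sample count, shape: [n, d]). Each random
    unit direction yields a 1-D projection; sorted projections are compared
    pointwise and the squared gaps are averaged over all directions."""
    rng = np.random.default_rng(seed)
    d = source_feats.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)          # random unit direction
        p_src = np.sort(source_feats @ theta)   # 1-D projection of source
        p_tgt = np.sort(target_feats @ theta)   # 1-D projection of target
        total += np.mean((p_src - p_tgt) ** 2)  # W2^2 in one dimension
    return total / n_projections
```

Minimizing such a distance between target-domain embeddings and samples drawn from an approximation of the source latent distribution aligns the two domains in the shared embedding space without needing the source data itself.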
1 code implementation • 23 Jun 2021 • Serban Stan, Mohammad Rostami
Multi-source unsupervised domain adaptation (MUDA) is a framework to address the challenge of annotated data scarcity in a target domain via transferring knowledge from multiple annotated source domains.
Tasks: Multi-Source Unsupervised Domain Adaptation • Unsupervised Domain Adaptation
1 code implementation • 2 Jan 2021 • Serban Stan, Mohammad Rostami
In this work, we develop an algorithm for UDA where the source domain data is inaccessible during target adaptation.
1 code implementation • 26 Sep 2020 • Serban Stan, Mohammad Rostami
We develop an algorithm for adapting a semantic segmentation model that is trained using a labeled source domain to generalize well in an unlabeled target domain.
no code implementations • ICML 2017 • Serban Stan, Morteza Zadimoghaddam, Andreas Krause, Amin Karbasi
As a remedy, we introduce the problem of sublinear-time probabilistic submodular maximization: given training examples of functions (e.g., via user feature vectors), we seek to reduce the ground set so that optimizing new functions drawn from the same distribution over the reduced ground set yields almost as much value as optimizing over the full set.
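A toy sketch of the ground-set-reduction idea, using coverage functions as the submodular family: run greedy maximization on each training instance, then keep the elements picked most often. Everything here (the coverage-function instances, the frequency-based reduction rule, the function names) is a simplified illustration under my own assumptions, not the paper's actual algorithm or guarantees.

```python
from collections import Counter

def greedy_max_cover(sets, ground, k):
    """Greedy k-element maximization of a coverage function, restricted to
    the elements in `ground`. `sets` maps each element to the items it covers.
    Returns the chosen elements and the number of items covered."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(ground - set(chosen),
                   key=lambda e: len(sets[e] - covered), default=None)
        if best is None or not (sets[best] - covered):
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, len(covered)

def reduce_ground_set(train_instances, ground, k, budget):
    """Shrink the ground set: keep the `budget` elements chosen most often by
    greedy across the training instances, hoping new functions from the same
    distribution lose little value when restricted to this subset."""
    counts = Counter()
    for sets in train_instances:
        chosen, _ = greedy_max_cover(sets, ground, k)
        counts.update(chosen)
    return {e for e, _ in counts.most_common(budget)}
```

Optimizing a fresh function then only scans the (much smaller) reduced set, which is where the sublinear-time saving comes from.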