Universal Domain Adaptation
18 papers with code • 4 benchmarks • 4 datasets
Most implemented papers
OVANet: One-vs-All Network for Universal Domain Adaptation
In this paper, we propose a method to learn the threshold using source samples and to adapt it to the target domain.
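A minimal sketch of the one-vs-all inference idea behind this line, under assumed details: each known class gets a binary "known vs. not" score, and the closed-set prediction is accepted only if its one-vs-all probability clears a threshold (0.5 here as an assumption); otherwise the sample is rejected as unknown. Function and argument names are illustrative, not OVANet's actual API.

```python
import numpy as np

def ova_predict(closed_logits, ova_logits, threshold=0.5):
    """One-vs-all open-set inference (illustrative sketch).

    closed_logits: per-class scores from the closed-set classifier.
    ova_logits: per-class logits from the one-vs-all heads, where a
    high value means "this sample belongs to that class".
    Returns the predicted class index, or -1 for "unknown".
    """
    cls = int(np.argmax(closed_logits))               # closed-set prediction
    p_known = 1.0 / (1.0 + np.exp(-ova_logits[cls]))  # sigmoid of its OVA logit
    return cls if p_known >= threshold else -1
```

For example, a sample whose top class also has a high one-vs-all score keeps its label, while one whose one-vs-all score is low is flagged as unknown even if the closed-set classifier is confident.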
Upcycling Models under Domain and Category Shift
We examine the superiority of our GLC on multiple benchmarks with different category shift scenarios, including partial-set, open-set, and open-partial-set DA.
Universal Domain Adaptation through Self Supervision
While some methods address target settings with either partial or open-set categories, they assume that the particular setting is known a priori.
Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
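The source-free setting above adapts a model using only unlabeled target data, commonly via an information-maximization objective: make each target prediction confident (low conditional entropy) while keeping predictions diverse across the batch (high marginal entropy). The sketch below shows that objective in isolation; the exact loss weighting and the rest of the training loop are assumptions, not the paper's full recipe.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def info_max_loss(logits, eps=1e-8):
    """Information-maximization loss (illustrative sketch):
    mean per-sample conditional entropy (to be minimized, i.e.
    confident predictions) minus the entropy of the batch-mean
    prediction (to be maximized, i.e. diverse predictions)."""
    p = softmax(logits)
    cond_ent = -(p * np.log(p + eps)).sum(axis=1).mean()
    marg = p.mean(axis=0)
    marg_ent = -(marg * np.log(marg + eps)).sum()
    return cond_ent - marg_ent
```

Confident-and-diverse batches drive this loss negative, while a batch of uniform predictions scores zero, so gradient descent on it pushes the adapted model toward sharp but balanced target predictions.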
Divergence Optimization for Noisy Universal Domain Adaptation
Hence, we consider a new realistic setting called Noisy UniDA, in which classifiers are trained with noisy labeled data from the source domain and unlabeled data with an unknown class distribution from the target domain.
On Universal Black-Box Domain Adaptation
The great promise of UB$^2$DA, however, brings significant learning challenges, since domain adaptation can rely only on the predictions of unlabeled target data in a partially overlapped label space, obtained by querying the interface of the source model.
Domain Consensus Clustering for Universal Domain Adaptation
To better exploit the intrinsic structure of the target domain, we propose Domain Consensus Clustering (DCC), which exploits the domain consensus knowledge to discover discriminative clusters on both common samples and private ones.
Distance-based Hyperspherical Classification for Multi-source Open-Set Domain Adaptation
Vision systems trained in closed-world scenarios fail when presented with new environmental conditions, new data distributions, and novel classes at deployment time.
VisDA-2021 Competition Universal Domain Adaptation to Improve Performance on Out-of-Distribution Data
Progress in machine learning is typically measured by training and testing a model on the same distribution of data, i.e., the same domain.
OneRing: A Simple Method for Source-free Open-partial Domain Adaptation
In this paper, we investigate Source-free Open-partial Domain Adaptation (SF-OPDA), which addresses the situation where there exist both domain and category shifts between source and target domains.