256 papers with code • 13 benchmarks • 13 datasets
Unsupervised Domain Adaptation is a learning framework for transferring knowledge from source domains with many annotated training examples to target domains with only unlabeled data.
Domain adaptation enables the learner to generalize safely to novel environments by mitigating domain shift across distributions.
Ranked #5 on Domain Adaptation on ImageCLEF-DA
This paper proposes an importance-weighted adversarial-nets-based method for unsupervised domain adaptation, designed specifically for partial domain adaptation, where the target domain has fewer classes than the source domain.
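The core idea of importance weighting can be sketched as follows: a domain discriminator scores each source sample, and samples it confidently identifies as source-only (likely belonging to classes absent from the target) are down-weighted in the adaptation loss. This is a generic illustration, not the paper's exact weighting function; the function name and normalization are assumptions.

```python
def importance_weights(disc_probs):
    """Per-sample weights for source examples, given a (hypothetical) domain
    discriminator's outputs.

    disc_probs[i] is the discriminator's confidence that source sample i is
    distinguishable from the target domain. Easily distinguished samples
    (likely outlier classes under partial domain adaptation) receive small
    weights, so they contribute less to the alignment objective.
    """
    raw = [1.0 - p for p in disc_probs]
    total = sum(raw)
    n = len(raw)
    # Normalize so the weights average to 1, keeping the loss scale stable.
    return [n * w / total for w in raw]
```

For example, a source sample the discriminator scores at 0.9 (clearly source-only) ends up with a much smaller weight than one scored at 0.1.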
To solve these problems, we introduce a new approach that aligns the source and target distributions by utilizing the task-specific decision boundaries.
Ranked #3 on Domain Adaptation on HMDBfull-to-UCF
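The decision-boundary idea can be sketched with a discrepancy measure between two task classifiers: the classifiers are trained to maximize their disagreement on target samples (exposing samples near the boundary), while the feature extractor is trained to minimize it. This is a minimal sketch of that style of discrepancy, assuming an L1 distance over softmax outputs; it is not claimed to be the paper's exact loss.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classifier_discrepancy(logits1, logits2):
    """Mean absolute difference between two classifiers' class probabilities,
    averaged over a batch of target samples. The classifiers maximize this on
    target data; the feature extractor minimizes it, pushing target features
    away from ambiguous decision-boundary regions."""
    total = 0.0
    for l1, l2 in zip(logits1, logits2):
        p1, p2 = softmax(l1), softmax(l2)
        total += sum(abs(a - b) for a, b in zip(p1, p2)) / len(p1)
    return total / len(logits1)
```

When the two classifiers agree exactly, the discrepancy is zero; it grows as their predictions on target samples diverge.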
Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amounts of labeled data from the source domain and large amounts of unlabeled data from the target domain (no labeled target-domain data is necessary).
Ranked #1 on Domain Adaptation on UCF-to-Olympic
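Domain-adversarial training of this kind typically ramps up the adversarial signal over the course of training, so the domain classifier is not fought while features are still noisy. A commonly used schedule for the gradient-reversal coefficient is sketched below; the function name is an assumption, and other schedules are possible.

```python
import math

def grl_lambda(progress, gamma=10.0):
    """Schedule for the gradient-reversal coefficient in domain-adversarial
    training. `progress` is training progress in [0, 1]; the coefficient
    ramps smoothly from 0 (no adversarial signal) toward 1."""
    return 2.0 / (1.0 + math.exp(-gamma * progress)) - 1.0
```

At the start of training the coefficient is 0, so the network first learns discriminative features from the labeled source data; by the end it approaches 1, fully weighting the domain-confusion term.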
To mitigate the effects of noisy pseudo labels, we propose an unsupervised framework, Mutual Mean-Teaching (MMT), which softly refines pseudo labels in the target domain and learns better target-domain features via off-line refined hard pseudo labels and on-line refined soft pseudo labels in an alternating training manner.
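The soft pseudo-label part of this scheme amounts to a soft cross-entropy: each student network is trained toward its peer's temporally averaged (mean-teacher) predictions, which are smoother than hard cluster-based pseudo labels. A minimal sketch of such a loss, with assumed function and argument names:

```python
import math

def soft_cross_entropy(student_probs, teacher_probs, eps=1e-12):
    """Soft pseudo-label loss: cross-entropy between a student's predicted
    class probabilities and the soft targets produced by the peer network's
    mean-teacher (temporally averaged) model."""
    return -sum(t * math.log(s + eps)
                for s, t in zip(student_probs, teacher_probs))
```

The loss is minimized when the student's distribution matches the teacher's soft targets, and grows as the two distributions diverge.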
Although the performance of person Re-Identification (ReID) has been significantly boosted, many challenging issues in real scenarios have not been fully investigated, e.g., complex scenes and lighting variations, viewpoint and pose changes, and the large number of identities in a camera network.
Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains.