Unsupervised domain adaptation (UDA) is a learning framework for transferring knowledge from a source domain, which has many annotated training examples, to a target domain that contains only unlabeled data.
Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks.
We describe a simple method for unsupervised domain adaptation, whereby the discrepancy between the source and target distributions is reduced by swapping the low-frequency spectrum of one with the other.
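The spectrum-swap idea above can be sketched with NumPy's FFT routines: take the 2-D Fourier transform of a source and a target image, replace the low-frequency amplitudes of the source with those of the target, and invert. This is a minimal single-channel sketch in the spirit of that method; the function name and the `beta` band-size parameter are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def low_freq_swap(src, trg, beta=0.1):
    """Replace the low-frequency amplitude of src with that of trg.

    src, trg: 2-D float arrays of the same shape (single-channel images).
    beta: fraction of the spectrum treated as "low frequency" (illustrative).
    """
    fft_src = np.fft.fft2(src)
    fft_trg = np.fft.fft2(trg)

    # Split the source into amplitude and phase; take the target amplitude.
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_trg = np.abs(fft_trg)

    # Shift the zero frequency to the center so the low band is a central block.
    amp_src = np.fft.fftshift(amp_src)
    amp_trg = np.fft.fftshift(amp_trg)

    h, w = src.shape
    b = int(np.floor(min(h, w) * beta / 2))
    ch, cw = h // 2, w // 2

    # Swap the central (low-frequency) amplitude block.
    amp_src[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_trg[ch - b:ch + b + 1, cw - b:cw + b + 1]

    # Recombine the modified amplitude with the original source phase and invert.
    amp_src = np.fft.ifftshift(amp_src)
    out = np.fft.ifft2(amp_src * np.exp(1j * pha_src))
    return np.real(out)
```

The source phase is kept untouched, so the semantic content of the source image is preserved while its low-frequency "style" (illumination, color statistics in the multi-channel case) moves toward the target.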
In this paper, we propose a unified Transfer Channel Pruning (TCP) approach for accelerating UDA models.
Existing methods either attempt to align the cross-domain distributions or perform manifold subspace learning.
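One widely used criterion for aligning cross-domain distributions is the maximum mean discrepancy (MMD), which measures the distance between the source and target feature distributions in a kernel space. The sketch below is a generic biased MMD estimate with an RBF kernel, offered as an illustration of the alignment idea rather than any specific paper's loss; the `gamma` bandwidth is an assumed hyperparameter.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF kernel between rows of x and rows of y.
    sq_dists = (np.sum(x ** 2, axis=1)[:, None]
                + np.sum(y ** 2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-gamma * sq_dists)

def mmd2(src, trg, gamma=1.0):
    """Biased estimate of the squared MMD between two samples.

    src, trg: arrays of shape (n_samples, n_features), e.g. network features.
    Returns 0 when the two samples are identical; grows as they diverge.
    """
    kxx = rbf_kernel(src, src, gamma).mean()
    kyy = rbf_kernel(trg, trg, gamma).mean()
    kxy = rbf_kernel(src, trg, gamma).mean()
    return kxx + kyy - 2.0 * kxy
```

In a typical UDA pipeline this quantity is added to the supervised source loss, so minimizing it pulls the source and target feature distributions together.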
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
An effective person re-identification (re-ID) model should learn feature representations that are both discriminative, for distinguishing similar-looking people, and generalisable, for deployment across datasets without any adaptation.
Semantic segmentation is a key problem for many computer vision tasks.
Domain adaptation enables the learner to safely generalize into novel environments by mitigating domain shifts across distributions.