Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks.
Existing methods either attempt to align the cross-domain distributions, or perform manifold subspace learning.
SOTA for Domain Adaptation on ImageCLEF-DA
An effective person re-identification (re-ID) model should learn feature representations that are both discriminative, for distinguishing similar-looking people, and generalisable, for deployment across datasets without any adaptation.
To solve these problems, we introduce a new approach that aligns the distributions of the source and target domains by utilizing task-specific decision boundaries.
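A minimal sketch of the quantity such boundary-based alignment typically optimizes, assuming (as in maximum-classifier-discrepancy-style methods) that the discrepancy is the mean L1 distance between two task classifiers' softmax outputs on target samples; the function names here are illustrative, not the paper's API:

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for numerical stability."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def classifier_discrepancy(logits_a, logits_b):
    """Mean L1 distance between two classifiers' class-probability outputs.

    In boundary-based adaptation this is maximized w.r.t. the classifiers
    (to expose target samples near the decision boundary) and minimized
    w.r.t. the feature extractor (to pull those samples inside the support
    of the source distribution).
    """
    return float(np.abs(softmax(logits_a) - softmax(logits_b)).mean())
```

Identical classifier outputs give zero discrepancy; disagreement on ambiguous target samples drives the adversarial min-max game.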
In contrast to subspace manifold methods, it aligns the original feature distributions of the source and target domains, rather than the bases of lower-dimensional subspaces.
Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amounts of labeled data from the source domain and large amounts of unlabeled data from the target domain (no labeled target-domain data is necessary).
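A common mechanism for this style of training is a gradient reversal layer: the identity in the forward pass, but negating (and scaling) gradients in the backward pass so that the feature extractor learns domain-invariant features while a domain classifier tries to tell source from target. The toy NumPy sketch below only illustrates that forward/backward behavior; it is an assumption-laden simplification, not any paper's implementation:

```python
import numpy as np

class GradientReversal:
    """Forward: identity. Backward: flip and scale the incoming gradient.

    `lam` controls how strongly the domain-classification gradient
    pushes the feature extractor toward domain-invariant features.
    """

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # pass features through unchanged

    def backward(self, grad):
        return -self.lam * grad  # reversed gradient reaches the feature extractor
```

In a real framework this would be implemented as a custom autograd operation; the numbers flowing forward are untouched, so only the optimization dynamics change.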
#4 best model for Unsupervised Image-To-Image Translation on SVHN-to-MNIST
Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains.
#3 best model for Unsupervised Image-To-Image Translation on SVHN-to-MNIST
The key insight is that, in addition to features, we can transfer similarity information, and this is sufficient to learn a similarity function and a clustering network that perform both domain adaptation and cross-task transfer learning.
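To make the idea concrete: once a transferred similarity function has labeled pairs of samples as same-class or not, clusters can be read off the resulting pairwise-similarity matrix. The sketch below does this with a simple union-find over "similar" pairs; it is a hypothetical illustration of consuming similarity predictions, not the paper's clustering network:

```python
def cluster_from_similarity(sim):
    """Cluster items from a binary pairwise-similarity matrix.

    `sim[i][j] == 1` means a (learned) similarity function predicted
    items i and j belong to the same class. Connected components of
    the similarity graph, found via union-find, become the clusters.
    Returns a list of integer cluster labels, one per item.
    """
    n = len(sim)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if sim[i][j]:
                parent[find(i)] = find(j)  # merge the two components

    roots = [find(i) for i in range(n)]
    labels = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [labels[r] for r in roots]
```

For example, `cluster_from_similarity([[1, 1, 0], [1, 1, 0], [0, 0, 1]])` groups the first two items together and leaves the third alone. A learned clustering network replaces this hard thresholding with a differentiable assignment, but the pairwise-similarity input is the same.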