Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks.
Existing methods either attempt to align the cross-domain distributions or perform manifold subspace learning.
SOTA for Domain Adaptation on ImageCLEF-DA
In contrast to subspace manifold methods, the proposed method aligns the original feature distributions of the source and target domains rather than the bases of lower-dimensional subspaces.
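One concrete way to align feature distributions directly, in this spirit, is a CORAL-style loss that matches second-order statistics (covariances) of source and target features. A minimal PyTorch sketch, assuming `source` and `target` are (batch, dim) feature matrices; the names are illustrative, not taken from any released code:

```python
import torch

def coral_loss(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """CORAL-style alignment: squared Frobenius distance between the
    covariance matrices of the source and target feature batches."""
    d = source.size(1)

    def covariance(x: torch.Tensor) -> torch.Tensor:
        x = x - x.mean(dim=0, keepdim=True)   # center each feature dimension
        return x.t() @ x / (x.size(0) - 1)    # unbiased sample covariance

    cs, ct = covariance(source), covariance(target)
    return ((cs - ct) ** 2).sum() / (4 * d * d)  # 1/(4d^2) scaling as in CORAL
```

Minimizing this term alongside the usual classification loss pulls the two feature distributions together without projecting into a lower-dimensional subspace.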
To solve these problems, we introduce a new approach that aligns the distributions of the source and target domains by utilizing task-specific decision boundaries.
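A minimal sketch of how decision boundaries can drive alignment, in the spirit of maximum-classifier-discrepancy training: two classifiers are trained to disagree on target samples, and the shared feature extractor is then updated to minimize that disagreement, pushing target features away from the boundaries. The function below (illustrative name, PyTorch) computes the discrepancy term:

```python
import torch
import torch.nn.functional as F

def classifier_discrepancy(logits1: torch.Tensor, logits2: torch.Tensor) -> torch.Tensor:
    """Mean L1 distance between two classifiers' predicted class
    distributions on unlabeled target samples. Maximized w.r.t. the
    classifiers, minimized w.r.t. the shared feature extractor."""
    p1 = F.softmax(logits1, dim=1)
    p2 = F.softmax(logits2, dim=1)
    return (p1 - p2).abs().mean()
```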
Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amounts of labeled data from the source domain and large amounts of unlabeled data from the target domain (no labeled target-domain data is necessary).
#4 best model for Unsupervised Image-To-Image Translation on SVHN-to-MNIST
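The source-labeled / target-unlabeled setup described above is commonly trained with a gradient reversal layer: features pass through unchanged in the forward pass, while the backward pass negates the gradient coming from a domain classifier, so the feature extractor learns domain-invariant representations. A minimal PyTorch sketch under that assumption (names are illustrative):

```python
import torch

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient by -lambda in
    the backward pass, turning a domain classifier's training signal
    into an adversarial one for the feature extractor."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # no gradient for lambd

def grad_reverse(x: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
    return GradientReversal.apply(x, lambd)
```

In use, the domain classifier receives `grad_reverse(features)`, so minimizing its loss simultaneously trains the extractor to make the two domains indistinguishable.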
Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains.
#3 best model for Unsupervised Image-To-Image Translation on SVHN-to-MNIST
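One common instantiation of this adversarial idea for adaptation is ADDA-style: a discriminator learns to tell source features from target features, and the target encoder is updated to fool it. A minimal sketch; `disc`, the optimizers, and the feature tensors are hypothetical placeholders, not any paper's released code:

```python
import torch
import torch.nn.functional as F

def adversarial_step(disc, src_feat, tgt_feat, disc_opt, enc_opt):
    """One adversarial update. `disc` maps (batch, dim) features to a
    (batch, 1) domain logit; `enc_opt` optimizes the target encoder
    that produced `tgt_feat`."""
    # 1) Discriminator step: source -> 1, target -> 0 (features detached).
    disc_opt.zero_grad()
    src_pred = disc(src_feat.detach())
    tgt_pred = disc(tgt_feat.detach())
    d_loss = (F.binary_cross_entropy_with_logits(src_pred, torch.ones_like(src_pred))
              + F.binary_cross_entropy_with_logits(tgt_pred, torch.zeros_like(tgt_pred)))
    d_loss.backward()
    disc_opt.step()

    # 2) Encoder step: make target features look "source-like" (target -> 1).
    enc_opt.zero_grad()
    tgt_pred = disc(tgt_feat)
    g_loss = F.binary_cross_entropy_with_logits(tgt_pred, torch.ones_like(tgt_pred))
    g_loss.backward()
    enc_opt.step()
```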
The key insight is that, in addition to features, we can transfer similarity information, and this is sufficient to learn a similarity function and a clustering network that perform both domain adaptation and cross-task transfer learning.
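A minimal sketch of what transferring similarity could look like: a small network is trained on the labeled source task to predict whether a pair of feature vectors belongs to the same class, and its pairwise predictions then serve as weak supervision for a clustering network on the target task. The architecture and names below are illustrative assumptions, not the paper's code:

```python
import torch
import torch.nn as nn

class SimilarityNet(nn.Module):
    """Predicts a same-class logit for a pair of feature vectors.
    Trained with pair labels from the source task, then reused to
    supervise clustering where target labels are unavailable."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Concatenate the pair and score how likely they share a class.
        return self.net(torch.cat([a, b], dim=1))
```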
Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable.