Domain Adaptation
2231 papers with code • 58 benchmarks • 94 datasets
Domain Adaptation is the task of adapting a model trained on a source domain so that it performs well on a different but related target domain. It is motivated by the fact that training and test data often come from different distributions, for example because of a change in sensors, style, or collection conditions. Domain adaptation aims to build machine learning models that generalize to the target domain despite the discrepancy between domain distributions.
(Image credit: Unsupervised Image-to-Image Translation Networks)
Libraries
Use these libraries to find Domain Adaptation models and implementations.
Subtasks
- Unsupervised Domain Adaptation
- Domain Generalization
- Test-time Adaptation
- Source-Free Domain Adaptation
- Universal Domain Adaptation
- Partial Domain Adaptation
- Online Domain Adaptation
- Continuously Indexed Domain Adaptation
- Prompt-driven Zero-shot Domain Adaptation
- Blended-target Domain Adaptation
- Wildly Unsupervised Domain Adaptation
- Video Domain Adaptation
- Open-Set Multi-Target Domain Adaptation
Most implemented papers
Language Models are Few-Shot Learners
By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do.
Domain-Adversarial Training of Neural Networks
Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains.
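The core mechanism behind this idea is the gradient reversal layer: it acts as the identity on the forward pass but flips (and scales) gradients on the backward pass, so the feature extractor is trained to confuse the domain classifier. Below is a minimal PyTorch sketch of that layer; the class and helper names are illustrative, not the authors' code.

```python
import torch


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies gradients by -lambd on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed, scaled gradient for x; no gradient for lambd.
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```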
SuperPoint: Self-Supervised Interest Point Detection and Description
This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision.
Generalized End-to-End Loss for Speaker Verification
In this paper, we propose a new loss function called generalized end-to-end (GE2E) loss, which makes the training of speaker verification models more efficient than our previous tuple-based end-to-end (TE2E) loss function.
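As a rough illustration, here is a simplified sketch of the softmax variant of the GE2E loss, assuming a batch of N speakers with M utterances each and L2-normalised embeddings. The leave-one-out centroid for an utterance's own speaker follows the paper's recipe, but the function name and tensor layout here are assumptions for this sketch.

```python
import torch
import torch.nn.functional as F


def ge2e_softmax_loss(emb, w, b):
    """Simplified GE2E (softmax variant) sketch.
    emb: [N speakers, M utterances, D] L2-normalised embeddings.
    w, b: learnable scalars (the paper constrains w to stay positive)."""
    N, M, D = emb.shape
    centroids = emb.mean(dim=1)                            # [N, D]
    # Leave-one-out centroid for each utterance's own speaker,
    # used to stabilise training as in the paper.
    loo = (emb.sum(dim=1, keepdim=True) - emb) / (M - 1)   # [N, M, D]

    # Cosine similarity of every utterance to every speaker centroid.
    sim = F.cosine_similarity(
        emb.reshape(N * M, 1, D), centroids.reshape(1, N, D), dim=2
    ).reshape(N, M, N)
    # Replace each utterance's own-speaker entry with the leave-one-out similarity.
    own = F.cosine_similarity(emb, loo, dim=2)             # [N, M]
    idx = torch.arange(N)
    sim[idx, :, idx] = own
    sim = w * sim + b

    target = idx.repeat_interleave(M)                      # true speaker per row
    return F.cross_entropy(sim.reshape(N * M, N), target)
```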
Two at Once: Enhancing Learning and Generalization Capacities via IBN-Net
IBN-Net carefully integrates Instance Normalization (IN) and Batch Normalization (BN) as building blocks, and can be wrapped into many advanced deep networks to improve their performances.
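The building block itself is compact: a hedged sketch of an IBN-a style block, assuming a 4-D feature map, is shown below. Half of the channels go through InstanceNorm2d (to wash out appearance variation), the rest through BatchNorm2d, and the two halves are concatenated back together; names are illustrative.

```python
import torch
import torch.nn as nn


class IBN(nn.Module):
    """Illustrative IBN-a style block: InstanceNorm on half the channels,
    BatchNorm on the other half, concatenated back together."""

    def __init__(self, planes):
        super().__init__()
        self.half = planes // 2
        self.IN = nn.InstanceNorm2d(self.half, affine=True)
        self.BN = nn.BatchNorm2d(planes - self.half)

    def forward(self, x):
        a, b = torch.split(x, [self.half, x.size(1) - self.half], dim=1)
        return torch.cat([self.IN(a), self.BN(b)], dim=1)
```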
Unsupervised Domain Adaptation by Backpropagation
Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amounts of labeled data from the source domain and large amounts of unlabeled data from the target domain (no labeled target-domain data is necessary).
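To show how the supervised and domain losses combine, here is one possible training step that reuses the grad_reverse helper sketched above; the module names (features, label_head, domain_head) and variable names are stand-ins, not the paper's code.

```python
import torch
import torch.nn.functional as F


def train_step(features, label_head, domain_head, x_s, y_s, x_t, lambd=0.1):
    f_s, f_t = features(x_s), features(x_t)

    # Supervised classification loss on labeled source data only.
    cls_loss = F.cross_entropy(label_head(f_s), y_s)

    # The domain classifier sees gradient-reversed features, pushing the
    # feature extractor toward domain-invariance (source = 0, target = 1).
    f_all = torch.cat([grad_reverse(f_s, lambd), grad_reverse(f_t, lambd)])
    d_labels = torch.cat([
        torch.zeros(len(x_s), dtype=torch.long, device=f_all.device),
        torch.ones(len(x_t), dtype=torch.long, device=f_all.device),
    ])
    dom_loss = F.cross_entropy(domain_head(f_all), d_labels)

    return cls_loss + dom_loss
```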
Adversarial Discriminative Domain Adaptation
Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains.
Learning to Adapt Structured Output Space for Semantic Segmentation
In this paper, we propose an adversarial learning method for domain adaptation in the context of semantic segmentation.
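The distinctive point is that the adversary operates on the structured output (softmax segmentation maps) rather than on intermediate features. A rough sketch of that setup, assuming a fully convolutional patch discriminator; the architecture and helper names below are assumptions, not the paper's exact model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_output_discriminator(num_classes, ch=64):
    """Illustrative patch discriminator over softmax segmentation maps."""
    return nn.Sequential(
        nn.Conv2d(num_classes, ch, 4, stride=2, padding=1),
        nn.LeakyReLU(0.2),
        nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1),
        nn.LeakyReLU(0.2),
        nn.Conv2d(2 * ch, 1, 4, stride=2, padding=1),  # per-patch source/target logit
    )


def adversarial_loss(disc, target_logits):
    # Encourage target predictions to look like source predictions (label 1).
    logits = disc(F.softmax(target_logits, dim=1))
    return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
```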
Deep CORAL: Correlation Alignment for Deep Domain Adaptation
CORAL is a "frustratingly easy" unsupervised domain adaptation method that aligns the second-order statistics of the source and target distributions with a linear transformation.
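The alignment itself is compact enough to sketch directly: the loss is the squared Frobenius distance between the feature covariances of a source batch and a target batch, scaled as in the paper. The function name below is illustrative.

```python
import torch


def coral_loss(source, target):
    """Deep CORAL loss sketch.
    source, target: [batch, d] feature matrices from the two domains."""
    d = source.size(1)

    def covariance(x):
        x = x - x.mean(dim=0, keepdim=True)
        return (x.t() @ x) / (x.size(0) - 1)

    diff = covariance(source) - covariance(target)
    return (diff * diff).sum() / (4 * d * d)
```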
Learning from Simulated and Unsupervised Images through Adversarial Training
With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations.