Unsupervised Domain Adaptation

727 papers with code • 36 benchmarks • 31 datasets

Unsupervised Domain Adaptation is a learning framework for transferring knowledge learned from source domains, which provide a large number of annotated training examples, to target domains that contain only unlabeled data.

Source: Domain-Specific Batch Normalization for Unsupervised Domain Adaptation
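The setting above can be made concrete with a toy sketch (illustrative only, not from any listed paper): a classifier fitted on labeled source data degrades on a shifted target domain, and a simple unsupervised correction, here mean matching using only unlabeled target features, recovers most of the accuracy. All names and the 1-D data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Labeled source domain: class 0 around -2, class 1 around +2 (1-D feature).
ys = rng.integers(0, 2, n)
Xs = rng.normal(4.0 * ys - 2.0, 0.7)

# Target domain: same classes, but every feature is shifted by +1.5
# (covariate shift). Target labels yt are used ONLY for evaluation.
yt = rng.integers(0, 2, n)
Xt = rng.normal(4.0 * yt - 2.0, 0.7) + 1.5

# Source-only classifier: threshold halfway between the source class means.
thresh = 0.5 * (Xs[ys == 0].mean() + Xs[ys == 1].mean())
acc_src_only = ((Xt > thresh).astype(int) == yt).mean()

# Unsupervised adaptation by mean matching: recentre target features using
# only unlabeled target data (no target labels touched).
Xt_adapted = Xt - Xt.mean() + Xs.mean()
acc_adapted = ((Xt_adapted > thresh).astype(int) == yt).mean()

print(acc_src_only, acc_adapted)
```

Real UDA methods replace the mean-matching step with richer alignment objectives (adversarial, optimal transport, covariance alignment), but the train-on-source / adapt-with-unlabeled-target structure is the same.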


Most implemented papers

Rescaling Egocentric Vision

epic-kitchens/epic-kitchens-100-annotations 23 Jun 2020

This paper introduces the pipeline to extend the largest dataset in egocentric vision, EPIC-KITCHENS.

Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks

tensorflow/models CVPR 2017

Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks.

Domain Separation Networks

tensorflow/models NeurIPS 2016

However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain.

Temporal Attentive Alignment for Large-Scale Video Domain Adaptation

cmhungsteve/TA3N ICCV 2019

Finally, we propose Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on four video DA datasets (e.g. 7.9% accuracy gain over "Source only" from 73.9% to 81.8% on "HMDB --> UCF", and 10.3% gain on "Kinetics --> Gameplay").

AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation

google-research/adamatch ICLR 2022

We extend semi-supervised learning to the problem of domain adaptation to learn significantly higher-accuracy models that train on one data distribution and test on a different one.
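One ingredient AdaMatch is known for is relative confidence thresholding: rather than a fixed cutoff, target pseudo-labels are kept when their confidence exceeds a fraction of the model's average top-1 confidence on the labeled source batch. The sketch below is a hedged, self-contained rendering of that idea; the function name and `tau` parameter are my own, not the paper's API.

```python
import numpy as np

def relative_threshold_mask(source_probs, target_probs, tau=0.9):
    """AdaMatch-style relative confidence thresholding (illustrative sketch).

    source_probs, target_probs: (batch, classes) softmax outputs.
    Keeps a target pseudo-label when its top-1 confidence is at least
    tau times the mean top-1 confidence on the source batch."""
    cutoff = tau * source_probs.max(axis=1).mean()
    confidence = target_probs.max(axis=1)
    mask = confidence >= cutoff
    pseudo_labels = target_probs.argmax(axis=1)
    return mask, pseudo_labels
```

Because the cutoff tracks how confident the model currently is on source data, the threshold tightens as training progresses instead of being hand-tuned per dataset.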

GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval

ukplab/gpl NAACL 2022

This limits the usage of dense retrieval approaches to only a few domains with large training datasets.

Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time

mlfoundations/model-soups 10 Mar 2022

The conventional recipe for maximizing model accuracy is to (1) train multiple models with various hyperparameters and (2) pick the individual model which performs best on a held-out validation set, discarding the remainder.
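The model-soup alternative to "pick the best checkpoint" is to average the weights of several fine-tuned models of the same architecture. A minimal sketch of the uniform soup, using plain dicts of numpy arrays as stand-in state dicts (the paper's code operates on real model checkpoints):

```python
import numpy as np

def uniform_soup(state_dicts):
    """Uniform model soup (illustrative sketch): element-wise average of
    the parameters of several fine-tuned models that share an architecture.

    state_dicts: list of {param_name: ndarray} with identical keys/shapes.
    Returns a single averaged state dict, usable as one model's weights."""
    keys = state_dicts[0].keys()
    return {k: np.mean([sd[k] for sd in state_dicts], axis=0) for k in keys}
```

Since the soup is a single set of weights, it adds no inference cost over one model, which is the point of the title. The paper also proposes a greedy variant that only adds a model to the soup if held-out accuracy improves.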

Correlation Alignment for Unsupervised Domain Adaptation

VisionLearningGroup/CORAL 6 Dec 2016

In contrast to subspace manifold methods, it aligns the original feature distributions of the source and target domains, rather than the bases of lower-dimensional subspaces.
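CORAL's alignment of "original feature distributions" amounts to matching second-order statistics: whiten the source features with the source covariance, then re-color them with the target covariance, using no target labels. A hedged numpy sketch of that transform (helper names are mine; the regularizer `eps` stands in for the identity term the method adds for stability):

```python
import numpy as np

def sym_matrix_power(M, p):
    """Power of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.power(w, p)) @ V.T

def coral(Xs, Xt, eps=1e-5):
    """CORAL-style alignment (illustrative sketch): whiten source features,
    then re-color with the target covariance, so the transformed source
    matches the target's second-order statistics."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(d)  # regularized source cov
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(d)  # regularized target cov
    return Xs @ sym_matrix_power(Cs, -0.5) @ sym_matrix_power(Ct, 0.5)
```

After the transform, a classifier trained on the re-colored source features sees inputs whose covariance matches the target domain's, which is exactly the "align the original feature distributions" claim in the snippet above.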

A DIRT-T Approach to Unsupervised Domain Adaptation

RuiShu/dirt-t ICLR 2018

Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable.

DeepJDOT: Deep Joint Distribution Optimal Transport for Unsupervised Domain Adaptation

bbdamodaran/deepJDOT ECCV 2018

In computer vision, one is often confronted with problems of domain shifts, which occur when one applies a classifier trained on a source dataset to target data sharing similar characteristics (e.g. same classes), but also different latent data structures (e.g. different acquisition conditions).