DRANet: Disentangling Representation and Adaptation Networks for Unsupervised Cross-Domain Adaptation

CVPR 2021 · Seunghun Lee, Sunghyun Cho, Sunghoon Im

In this paper, we present DRANet, a network architecture that disentangles image representations and transfers visual attributes in a latent space for unsupervised cross-domain adaptation. Unlike existing domain adaptation methods that learn features shared across domains, DRANet preserves the distinctiveness of each domain's characteristics. Our model encodes individual representations of content (scene structure) and style (artistic appearance) from both source and target images. It then adapts the domain by combining the transferred style factor with the content factor, weighted by learnable parameters specific to each domain. This framework enables bi-/multi-directional domain adaptation with a single encoder-decoder network while aligning the domain shift between domains. Additionally, we propose a content-adaptive domain transfer module that helps retain scene structure while transferring style. Extensive experiments show that our model successfully separates content and style factors and synthesizes visually pleasing domain-transferred images. The proposed method achieves state-of-the-art performance on standard digit classification tasks as well as semantic segmentation tasks.
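The adaptation step the abstract describes, recombining one image's content factor with another image's style factor under per-domain learnable weights, can be made concrete with a short sketch. The PyTorch code below is a minimal illustration, not the authors' implementation: the layer choices, the residual content/style split, and the module names (`Encoder`, `Separator`, `Decoder`, `DRANetSketch`) are all assumptions for exposition, with only the content-plus-style mixing with per-domain weights taken from the abstract.

```python
# A minimal sketch of DRANet-style disentanglement and adaptation for
# 3x64x64 inputs. Architecture details are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an image to a latent feature map (stand-in architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class Separator(nn.Module):
    """Splits a latent feature into content (scene structure) and style."""
    def __init__(self, ch=128):
        super().__init__()
        self.content_head = nn.Conv2d(ch, ch, 3, padding=1)
    def forward(self, feat):
        content = self.content_head(feat)
        style = feat - content  # residual carries appearance (assumed split)
        return content, style

class Decoder(nn.Module):
    """Reconstructs an image from a (re-mixed) latent feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, feat):
        return self.net(feat)

class DRANetSketch(nn.Module):
    def __init__(self, domains=("source", "target")):
        super().__init__()
        self.enc, self.sep, self.dec = Encoder(), Separator(), Decoder()
        # One learnable content/style scale per domain, following the
        # abstract's "learnable weights specified for each domain".
        self.w_c = nn.ParameterDict({d: nn.Parameter(torch.ones(1)) for d in domains})
        self.w_s = nn.ParameterDict({d: nn.Parameter(torch.ones(1)) for d in domains})

    def transfer(self, x_src, x_tgt, tgt="target"):
        """Render x_src's content in x_tgt's style (source -> target)."""
        c_src, _ = self.sep(self.enc(x_src))
        _, s_tgt = self.sep(self.enc(x_tgt))
        mixed = self.w_c[tgt] * c_src + self.w_s[tgt] * s_tgt
        return self.dec(mixed)

model = DRANetSketch()
fake_tgt = model.transfer(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))
print(fake_tgt.shape)  # torch.Size([4, 3, 64, 64])
```

Note how a single encoder-decoder serves both directions: swapping which image supplies content and which supplies style (and indexing the weights by the other domain) gives the reverse transfer, matching the bi-/multi-directional claim above.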


Results from the Paper

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Domain Adaptation | MNIST-M-to-MNIST | DRANet | Accuracy | 99.3% | #1 |
| Domain Adaptation | MNIST-to-MNIST-M | DRANet | Accuracy | 98.7% | #1 |
| Domain Adaptation | MNIST-to-USPS | DRANet | Accuracy | 98.2% | #4 |
| Domain Adaptation | USPS-to-MNIST | DRANet | Accuracy | 97.8% | #7 |
