CyCADA: Cycle-Consistent Adversarial Domain Adaptation

Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain-invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel level and feature level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes, demonstrating transfer from synthetic to real-world domains.
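The abstract describes an objective that combines pixel-level and feature-level adversarial losses with a cycle-consistency term and a task loss. The sketch below illustrates how these terms might be weighted and summed; the helper names and the loss weights (`lambda_cycle`, `lambda_task`) are illustrative assumptions, not the paper's exact formulation or hyperparameters.

```python
import numpy as np


def cycle_consistency_loss(x, x_reconstructed):
    # L1 penalty between a source image and its round-trip reconstruction,
    # i.e. source -> target -> source through the two mapping networks.
    return np.mean(np.abs(x - x_reconstructed))


def cycada_objective(loss_gan_pixel, loss_gan_feat, loss_cycle, loss_task,
                     lambda_cycle=10.0, lambda_task=1.0):
    # Weighted sum of the four loss terms sketched from the abstract:
    # pixel-level GAN loss, feature-level GAN loss, cycle-consistency,
    # and the (semantic) task loss. The weights are hypothetical.
    return (loss_gan_pixel
            + loss_gan_feat
            + lambda_cycle * loss_cycle
            + lambda_task * loss_task)
```

A perfect reconstruction drives the cycle term to zero, so in that regime the objective reduces to the adversarial and task losses alone.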

ICML 2018
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Synthetic-to-Real Translation | GTAV-to-Cityscapes Labels | CyCADA pixel+feat | mIoU | 39.5 | # 63 |
| Synthetic-to-Real Translation | GTAV-to-Cityscapes Labels | CyCADA pixel+feat | fwIoU | 72.4 | # 1 |
| Synthetic-to-Real Translation | GTAV-to-Cityscapes Labels | CyCADA pixel+feat | Per-pixel Accuracy | 82.3% | # 1 |
| Synthetic-to-Real Translation | GTAV-to-Cityscapes Labels | CyCADA pixel-only | mIoU | 34.8 | # 66 |
| Heart Segmentation | Multi-Modality Whole Heart Segmentation Challenge 2017 | CyCADA (Hoffman et al., 2018) | Average ASD | 9.4 | # 3 |
| Heart Segmentation | Multi-Modality Whole Heart Segmentation Challenge 2017 | CyCADA (Hoffman et al., 2018) | Average Dice | 64.4 | # 2 |
| Domain Adaptation | SVHN-to-MNIST | CyCADA | Accuracy | 90.4 | # 10 |
| Unsupervised Image-To-Image Translation | SVHN-to-MNIST | CyCADA pixel+feat | Classification Accuracy | 90.4% | # 1 |
| Image-to-Image Translation | SYNTHIA Fall-to-Winter | CyCADA | mIoU | 63.3 | # 1 |
| Image-to-Image Translation | SYNTHIA Fall-to-Winter | CyCADA | Per-pixel Accuracy | 92.1% | # 1 |
| Image-to-Image Translation | SYNTHIA Fall-to-Winter | CyCADA | fwIoU | 85.7 | # 1 |

