Sliced Wasserstein Discrepancy for Unsupervised Domain Adaptation

In this work, we connect two distinct concepts for unsupervised domain adaptation: feature distribution alignment between domains by utilizing the task-specific decision boundary, and the Wasserstein metric. Our proposed sliced Wasserstein discrepancy (SWD) is designed to capture the natural notion of dissimilarity between the outputs of task-specific classifiers. It provides geometrically meaningful guidance for detecting target samples that are far from the support of the source and enables efficient distribution alignment in an end-to-end trainable fashion. In the experiments, we validate the effectiveness and generality of our method on digit and sign recognition, image classification, semantic segmentation, and object detection.
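The core computation can be sketched concisely. Below is a minimal PyTorch illustration (not the authors' reference implementation) of a sliced Wasserstein discrepancy between the outputs of two task-specific classifiers: both output batches are projected onto random one-dimensional directions, sorted along each slice to obtain the 1-D optimal transport coupling, and compared. The function name, the number of projections, and the squared per-slice cost are illustrative assumptions.

```python
import torch

def sliced_wasserstein_discrepancy(p1, p2, num_projections=128):
    """Approximate sliced Wasserstein distance between two batches of
    classifier outputs p1, p2 of shape (batch, num_classes).
    Assumes both batches have the same size."""
    # Sample random projection directions and normalize them to the unit sphere.
    proj = torch.randn(p1.size(1), num_projections, device=p1.device)
    proj = proj / proj.norm(dim=0, keepdim=True)
    # Project both output batches onto each random direction.
    p1_proj = p1 @ proj  # (batch, num_projections)
    p2_proj = p2 @ proj
    # Sorting along the batch dimension gives the 1-D optimal transport
    # coupling for each slice.
    p1_sorted, _ = torch.sort(p1_proj, dim=0)
    p2_sorted, _ = torch.sort(p2_proj, dim=0)
    # Average squared 1-D transport cost over all slices.
    return ((p1_sorted - p2_sorted) ** 2).mean()
```

In the adversarial alignment scheme the paper describes, such a discrepancy term would be maximized with respect to the two classifiers and minimized with respect to the feature extractor on target samples; the sketch above covers only the discrepancy itself.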


Results from the Paper


Task                          | Dataset                   | Model | Metric Name       | Metric Value | Global Rank
------------------------------|---------------------------|-------|-------------------|--------------|------------
Synthetic-to-Real Translation | GTAV-to-Cityscapes Labels | SWD   | mIoU              | 44.5         | #59
Image-to-Image Translation    | SYNTHIA-to-Cityscapes     | SWD   | mIoU (13 classes) | 48.1         | #19
Domain Adaptation             | VisDA2017                 | SWD   | Accuracy          | 76.4         | #19

Methods


No methods listed for this paper.