Seeking Similarities over Differences: Similarity-based Domain Alignment for Adaptive Object Detection

To deploy object detectors robustly across a wide range of scenarios, they should be adaptable to shifts in the input distribution without requiring constant annotation of new data. This has motivated research in Unsupervised Domain Adaptation (UDA) algorithms for detection. UDA methods learn to adapt from labeled source domains to unlabeled target domains by inducing alignment between detector features from the source and target domains. Yet there is no consensus on which features to align or how to perform the alignment. In our work, we propose a framework that generalizes the different components commonly used by UDA methods, laying the groundwork for an in-depth analysis of the UDA design space. Specifically, we propose a novel UDA algorithm, ViSGA, a direct implementation of our framework that leverages the best design choices and introduces a simple but effective method to aggregate features at the instance level based on visual similarity before inducing group alignment via adversarial training. We show that both similarity-based grouping and adversarial training allow our model to focus on coarsely aligning feature groups, without being forced to match all instances across loosely aligned domains. Finally, we examine the applicability of ViSGA to the setting where labeled data are gathered from different sources. Experiments show that our method not only outperforms previous single-source approaches on Sim2Real and Adverse Weather benchmarks, but also generalizes well to the multi-source setting.
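The abstract only summarizes the mechanism, so the sketch below illustrates the general idea in PyTorch: instance-level features are greedily grouped by cosine similarity into prototypes, and a small domain discriminator behind a gradient-reversal layer aligns the source and target groups adversarially. This is a minimal sketch, not the authors' implementation; the similarity threshold, network sizes, and all function and class names (group_by_similarity, DomainDiscriminator, etc.) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): instance features are grouped by
# cosine similarity into prototypes, and a domain discriminator behind a
# gradient-reversal layer aligns source/target prototypes adversarially.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def group_by_similarity(feats, threshold=0.7):
    """Greedy similarity-based grouping (assumed scheme): each instance feature
    joins the first group whose prototype has cosine similarity >= threshold,
    otherwise it starts a new group. Returns the group prototypes (running means)."""
    prototypes, counts = [], []
    for f in F.normalize(feats, dim=-1):
        assigned = False
        for i, p in enumerate(prototypes):
            if F.cosine_similarity(f, p, dim=0) >= threshold:
                counts[i] += 1
                prototypes[i] = p + (f - p) / counts[i]  # update running mean
                assigned = True
                break
        if not assigned:
            prototypes.append(f.clone())
            counts.append(1)
    return torch.stack(prototypes)


class DomainDiscriminator(nn.Module):
    """Small MLP that predicts the domain (source vs. target) of each prototype."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(inplace=True), nn.Linear(128, 1)
        )

    def forward(self, prototypes, lambd=1.0):
        return self.net(GradReverse.apply(prototypes, lambd))


def group_alignment_loss(src_feats, tgt_feats, disc):
    """Adversarial loss computed over similarity-based groups, not raw instances."""
    src_proto = group_by_similarity(src_feats)
    tgt_proto = group_by_similarity(tgt_feats)
    logits = disc(torch.cat([src_proto, tgt_proto]))
    labels = torch.cat([torch.zeros(len(src_proto), 1),
                        torch.ones(len(tgt_proto), 1)])
    return F.binary_cross_entropy_with_logits(logits, labels)


if __name__ == "__main__":
    disc = DomainDiscriminator(dim=256)
    src = torch.randn(32, 256, requires_grad=True)  # e.g. RoI-pooled source features
    tgt = torch.randn(24, 256, requires_grad=True)  # e.g. RoI-pooled target features
    loss = group_alignment_loss(src, tgt, disc)
    loss.backward()  # gradients flowing back to the features are reversed by GradReverse
    print(f"group alignment loss: {loss.item():.4f}")
```

In a full detector, the feature tensors would come from the RoI head of source and target batches, and this loss would be added to the supervised detection loss on the labeled source data; aligning prototypes rather than individual instances is what avoids forcing one-to-one matches across loosely aligned domains.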

ICCV 2021
Task | Dataset | Model | Metric | Value | Global Rank
Unsupervised Domain Adaptation | Cityscapes to Foggy Cityscapes | ViSGA | mAP@0.5 | 43.3 | #13
Unsupervised Domain Adaptation | Kitti to Cityscapes | ViSGA | mAP@0.5 | 47.6 | #1
Multi-Source Unsupervised Domain Adaptation | Sim10k and Kitti to Cityscapes | ViSGA | mAP@0.5 | 51.3 | #1
Unsupervised Domain Adaptation | SIM10K to Cityscapes | ViSGA | mAP@0.5 | 49.3 | #10
