Unsupervised Domain Adaptation Via Pseudo-labels And Objectness Constraints

29 Sep 2021  ·  Rajshekhar Das, Jonathan Francis, Sanket Vaibhav Mehta, Jean Oh, Emma Strubell, Jose Moura ·

Pseudo-label self-training has emerged as a dominant approach to unsupervised domain adaptation (UDA) for semantic segmentation. Despite recent advances, this approach is susceptible to erroneous pseudo-labels arising from confirmation bias, which ultimately lead to sub-optimal segmentation. To mitigate the effect of noisy pseudo-labels, we propose regularising conventional self-training objectives with constraints derived from structure-preserving modalities, such as depth. Towards this end, we introduce a contrastive image-level objectness constraint that pulls the pixel representations of the same object instance closer while pushing those from different object categories apart. To identify pixels within an object, we subscribe to a notion of objectness derived from depth maps, which are robust to photometric variations, and from superpixels, which are obtained via unsupervised clustering over the raw image space. Crucially, the objectness constraint is agnostic to the ground-truth semantic segmentation labels and, therefore, remains appropriate for unsupervised adaptation settings. In this paper, we show that our approach of leveraging multi-modal constraints improves top-performing self-training methods on various UDA benchmarks for semantic segmentation. We make our code and data-splits available in the supplementary material.
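The abstract does not give the exact formulation of the contrastive objectness constraint, but the pull/push idea it describes can be illustrated with a minimal sketch: pixels sharing an object assignment (e.g. from a superpixel or depth-derived segment) are pulled toward their segment centroid, while centroids of different segments are pushed at least a margin apart. The function name, margin value, and centroid-based formulation below are illustrative assumptions, not the paper's actual loss.

```python
import numpy as np

def objectness_contrastive_loss(embeddings, segment_ids, margin=1.0):
    """Illustrative pull/push loss over pixel embeddings (not the paper's exact loss).

    embeddings  : (N, D) array of per-pixel feature vectors
    segment_ids : (N,) object/superpixel assignment per pixel
    margin      : minimum desired distance between different-segment centroids
    """
    segs = np.unique(segment_ids)
    # One centroid per object segment.
    centroids = np.stack([embeddings[segment_ids == s].mean(axis=0) for s in segs])

    # Pull term: pixels of the same segment move toward their centroid.
    pull = 0.0
    for c, s in zip(centroids, segs):
        diff = embeddings[segment_ids == s] - c
        pull += np.mean(np.sum(diff * diff, axis=1))
    pull /= len(segs)

    # Push term: hinge penalty when two segment centroids are closer than `margin`.
    push, pairs = 0.0, 0
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            dist = np.linalg.norm(centroids[i] - centroids[j])
            push += max(0.0, margin - dist) ** 2
            pairs += 1
    if pairs:
        push /= pairs
    return pull + push
```

With well-separated, internally tight segments the loss is near zero; mixing pixels of distant objects into one segment inflates both terms, which is the behaviour the constraint is meant to penalise.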
