Open-Set Hypothesis Transfer with Semantic Consistency

1 Oct 2020 · Zeyu Feng, Chang Xu, Dacheng Tao

Unsupervised open-set domain adaptation (UODA) is a realistic problem in which unlabeled target data contain unknown classes. Prior methods rely on the coexistence of source and target domain data to perform domain alignment, which greatly limits their applicability when access to source data is restricted due to privacy concerns. This paper addresses the challenging hypothesis transfer setting for UODA, where data from the source domain are no longer available during adaptation on the target domain. We introduce a method that focuses on the semantic consistency of target data under transformation, a property rarely exploited by previous domain adaptation methods. Specifically, our model first discovers confident predictions and performs classification with pseudo-labels. It then requires the model to output consistent and definite predictions on semantically similar inputs. As a result, unlabeled data can be classified into discriminative classes that coincide with either the source classes or the unknown classes. Experimental results show that our model outperforms state-of-the-art methods on UODA benchmarks.
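
To make the described procedure concrete, the sketch below illustrates the two ingredients the abstract names: classification with confidence-filtered pseudo-labels, and a consistency term between predictions on semantically similar (transformed) views of the same target image. It is a minimal sketch assuming a PyTorch-style classifier; all names (model, weak_aug, strong_aug, conf_threshold, consistency_weight) and the specific loss forms are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): confidence-based pseudo-labeling plus
# a consistency term between predictions on two transformed views of the same
# unlabeled target image. All names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F


def adaptation_losses(model, images, weak_aug, strong_aug,
                      conf_threshold=0.9, consistency_weight=1.0):
    """Self-training losses for a batch of unlabeled target images."""
    weak_views = weak_aug(images)      # lightly transformed inputs
    strong_views = strong_aug(images)  # semantically similar, heavier transform

    with torch.no_grad():
        weak_probs = F.softmax(model(weak_views), dim=1)
        max_probs, pseudo_labels = weak_probs.max(dim=1)
        confident = max_probs >= conf_threshold  # keep only confident predictions

    strong_logits = model(strong_views)

    # (1) classification with pseudo-labels on the confident subset
    if confident.any():
        pseudo_loss = F.cross_entropy(strong_logits[confident],
                                      pseudo_labels[confident])
    else:
        pseudo_loss = strong_logits.sum() * 0.0  # keep graph valid if none confident

    # (2) consistency: predictions on the two views should agree
    strong_probs = F.softmax(strong_logits, dim=1)
    consistency_loss = F.mse_loss(strong_probs, weak_probs)

    return pseudo_loss + consistency_weight * consistency_loss
```

In this reading, the "definite predictions" mentioned in the abstract could additionally be encouraged with an entropy-minimization term on the target outputs; the open-set decision (source class vs. unknown) is likewise left out of the sketch, since the abstract does not specify how it is made.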
