Domain-Invariant Representation Learning with Global and Local Consistency

29 Sep 2021  ·  Wenwen Qiang, Jiangmeng Li, Jie Hu, Bing Su, Changwen Zheng, Hui Xiong

In this paper, we analyze the existing representation learning framework for unsupervised domain adaptation and show that the learned feature representations of the source-domain samples exhibit discriminability, compressibility, and transferability, whereas the learned feature representations of the target-domain samples exhibit only compressibility and transferability. To address this challenge, we propose a novel framework and show, from an information-theoretic view, that it can effectively improve the discriminability of the target-domain sample representations. Under this framework, we further propose a method named domain-invariant representation learning with global and local consistency (RLGLC). In particular, to maintain global consistency, RLGLC introduces a new metric called the asymmetrically-relaxed Wasserstein of Wasserstein distance (AR-WWD), which not only captures the transferability and compressibility of the feature representations of the two domains but also correlates well with human perception. To impose local consistency, we propose a regularized contrastive loss, which preserves as much of the predictive information in the target-domain feature representations as possible and alleviates the problem of semantically similar instances being undesirably pushed apart during training. Finally, we verify the effectiveness of RLGLC through both a theoretical analysis of the Bayes error rate and experimental validation on several benchmarks.
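The abstract only names the two objectives (a global alignment term and a local contrastive term); the paper's actual AR-WWD metric and regularized contrastive loss are not specified here. As a rough, hedged illustration of how such a two-part objective could be wired up, the sketch below uses an entropic-OT (Sinkhorn) distance as a stand-in for the global alignment term and an InfoNCE-style contrastive loss on target features as a stand-in for the local term. All function names, hyperparameters (eps, temperature, lam_align, lam_con), and the specific distances are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): a domain-adaptation objective combining
# (i) a global alignment term between source and target feature batches, approximated
#     here by an entropic-OT (Sinkhorn) distance as a stand-in for AR-WWD, and
# (ii) a local InfoNCE-style contrastive term on two views of the target batch as a
#     stand-in for the paper's regularized contrastive loss.
import torch
import torch.nn.functional as F


def sinkhorn_distance(x, y, eps=0.1, n_iters=50):
    """Entropic-OT distance between two feature batches with uniform marginals."""
    cost = torch.cdist(x, y, p=2) ** 2                       # pairwise squared distances
    k = torch.exp(-cost / eps)                                # Gibbs kernel
    a = torch.full((x.size(0),), 1.0 / x.size(0), device=x.device)
    b = torch.full((y.size(0),), 1.0 / y.size(0), device=y.device)
    u = torch.ones_like(a)
    for _ in range(n_iters):                                  # Sinkhorn iterations
        v = b / (k.t() @ u + 1e-8)
        u = a / (k @ v + 1e-8)
    pi = torch.diag(u) @ k @ torch.diag(v)                    # approximate transport plan
    return (pi * cost).sum()


def info_nce(z, z_aug, temperature=0.1):
    """Contrastive loss between two augmented views of the same target batch."""
    z = F.normalize(z, dim=1)
    z_aug = F.normalize(z_aug, dim=1)
    logits = z @ z_aug.t() / temperature                      # cosine-similarity logits
    labels = torch.arange(z.size(0), device=z.device)         # positives on the diagonal
    return F.cross_entropy(logits, labels)


def total_loss(src_feats, tgt_feats, tgt_feats_aug, src_logits, src_labels,
               lam_align=1.0, lam_con=0.5):
    """Supervised source loss + global alignment + local contrastive regularizer."""
    cls = F.cross_entropy(src_logits, src_labels)             # source classification loss
    align = sinkhorn_distance(src_feats, tgt_feats)           # global consistency (stand-in)
    con = info_nce(tgt_feats, tgt_feats_aug)                  # local consistency (stand-in)
    return cls + lam_align * align + lam_con * con
```

In this kind of setup the alignment term pulls the target feature distribution toward the source features (the global consistency the abstract refers to), while the contrastive term keeps semantically similar target instances close (the local consistency); the weights lam_align and lam_con are illustrative placeholders.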
