Correlation Alignment by Riemannian Metric for Domain Adaptation

23 May 2017 · Pietro Morerio, Vittorio Murino

Domain adaptation techniques address the problem of reducing the sensitivity of machine learning methods to the so-called domain shift, namely the difference between source (training) and target (test) data distributions. In particular, unsupervised domain adaptation assumes no labels are available in the target domain. To this end, aligning second order statistics (covariances) of target and source domains has proven to be an effective approach to fill the gap between the domains. However, covariance matrices do not form a subspace of the Euclidean space, but live on a Riemannian manifold with non-positive curvature, making the usual Euclidean metric suboptimal to measure distances. In this paper, we extend the idea of training a neural network with a constraint on the covariances of the hidden layer features, by rigorously accounting for the curved structure of the manifold of symmetric positive definite matrices. The resulting loss function exploits a theoretically sound geodesic distance on such manifold. Results indeed confirm the suboptimality of the Euclidean distance, and allow us to outperform previous approaches on the standard Office dataset, a benchmark for domain adaptation techniques.
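To illustrate the idea, below is a minimal PyTorch sketch of a covariance-alignment loss based on a geodesic (log-Euclidean) distance between SPD matrices, d(Cs, Ct) = ||log(Cs) - log(Ct)||_F, as opposed to the plain Euclidean (Frobenius) distance used by earlier correlation-alignment methods. The function names, the ridge term eps, and the 1/(4d^2) normalization (borrowed from the Deep CORAL convention) are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def matrix_log(C, eps=1e-6):
    # Matrix logarithm of a symmetric positive definite matrix via
    # eigendecomposition: log(C) = V diag(log(lambda)) V^T
    eigvals, eigvecs = torch.linalg.eigh(C)
    eigvals = torch.clamp(eigvals, min=eps)  # guard against numerical non-PD
    return eigvecs @ torch.diag(torch.log(eigvals)) @ eigvecs.T

def covariance(features):
    # Unbiased covariance of a (batch, dim) matrix of hidden-layer features
    n = features.shape[0]
    centered = features - features.mean(dim=0, keepdim=True)
    return centered.T @ centered / (n - 1)

def geodesic_align_loss(source_feats, target_feats, eps=1e-6):
    # Log-Euclidean geodesic alignment of source/target covariances.
    # eps * I keeps the estimated covariances strictly positive definite.
    d = source_feats.shape[1]
    I = torch.eye(d, device=source_feats.device, dtype=source_feats.dtype)
    Cs = covariance(source_feats) + eps * I
    Ct = covariance(target_feats) + eps * I
    diff = matrix_log(Cs, eps) - matrix_log(Ct, eps)
    return (diff ** 2).sum() / (4 * d * d)
```

In training, this term would be added (with a trade-off weight) to the supervised classification loss on source data, so the network learns features whose second-order statistics agree across domains along the SPD manifold rather than in the ambient Euclidean space.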
