Partial Optimal Transport with Applications on Positive-Unlabeled Learning

19 Feb 2020 · Laetitia Chapel, Mokhtar Z. Alaya, Gilles Gasso

The classical optimal transport problem seeks a transportation map that preserves the total mass between two probability distributions, requiring their masses to be equal. This may be too restrictive in certain applications, such as color or shape matching, where the distributions may have arbitrary masses and/or where only a fraction of the total mass has to be transported. Several algorithms have been devised for computing partial Wasserstein metrics that rely on an entropic regularization, but when it comes to exact solutions, almost no partial formulation of either Wasserstein or Gromov-Wasserstein is available yet. This precludes working with distributions that do not lie in the same metric space, or with applications where invariance to rotation or translation is needed. In this paper, we address the partial Wasserstein and Gromov-Wasserstein problems and propose exact algorithms to solve them. We showcase the new formulations in a positive-unlabeled (PU) learning application. To the best of our knowledge, this is the first application of optimal transport in this context: we first highlight that partial Wasserstein-based metrics prove effective in usual PU learning settings, and we then demonstrate that the partial Gromov-Wasserstein metric is efficient in scenarios where the point clouds come from different domains or have different features.
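In the partial problem, only a prescribed fraction m ∈ [0, 1] of the total mass is moved: the plan T ≥ 0 must satisfy T𝟙 ≤ a, Tᵀ𝟙 ≤ b and 𝟙ᵀT𝟙 = m, while minimizing the cost ⟨T, M⟩. Exact partial solvers are shipped in the POT (Python Optimal Transport) library's `ot.partial` module, which cites this paper; the minimal sketch below is an illustration under the assumption of a recent POT release (≥ 0.7) where `ot.partial.partial_wasserstein` is available, and the point clouds, weights, and mass fraction are made up for the example.

```python
import numpy as np
import ot  # POT: pip install pot
from ot.partial import partial_wasserstein  # exact partial OT solver

rng = np.random.default_rng(0)

# Two point clouds of different sizes; each set of uniform weights sums
# to 1, but only a fraction m of that mass will be transported.
xs = rng.normal(size=(30, 2))          # source samples
xt = rng.normal(size=(40, 2)) + 2.0    # target samples, shifted

a, b = ot.unif(30), ot.unif(40)        # uniform histograms
M = ot.dist(xs, xt)                    # squared Euclidean ground cost

m = 0.5                                # fraction of total mass to transport
T = partial_wasserstein(a, b, M, m=m)  # optimal partial transport plan

print("mass actually transported:", T.sum())    # ~= m
print("partial Wasserstein cost :", (T * M).sum())
```

The same module also exposes `ot.partial.partial_gromov_wasserstein(C1, C2, p, q, m=m)` for the cross-space case, where only the intra-domain distance matrices C1 and C2 are compared, matching the PU learning scenario in which point clouds come from different domains.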
