no code implementations • 2 Jan 2024 • Scott Mahan, Caroline Moosmüller, Alexander Cloninger
Our approach is motivated by the observation that $L^2$-distances between optimal transport maps for distinct point clouds, originating from a shared fixed reference distribution, approximate the Wasserstein-2 distance between these point clouds under certain assumptions.
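This observation can be checked numerically. Below is a minimal sketch (not the authors' implementation), assuming equal-size point clouds with uniform weights so that discrete Wasserstein-2 reduces to a linear assignment problem; the helper names `monge_map` and `w2` are illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def monge_map(ref, cloud):
    """Optimal assignment map from reference points onto cloud points
    under squared Euclidean cost (discrete Wasserstein-2 setting)."""
    cost = ((ref[:, None, :] - cloud[None, :, :]) ** 2).sum(-1)
    _, col = linear_sum_assignment(cost)
    return cloud[col]  # T(ref_i) = cloud_{col_i}

def w2(a, b):
    """Discrete Wasserstein-2 distance between equal-size uniform clouds."""
    cost = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    r, c = linear_sum_assignment(cost)
    return np.sqrt(cost[r, c].mean())

rng = np.random.default_rng(0)
ref = rng.normal(size=(200, 2))            # shared fixed reference cloud
X = rng.normal(size=(200, 2)) + [2.0, 0.0] # point cloud 1
Y = rng.normal(size=(200, 2)) + [0.0, 2.0] # point cloud 2

TX, TY = monge_map(ref, X), monge_map(ref, Y)
lot = np.sqrt(((TX - TY) ** 2).sum(-1).mean())  # L2 distance of the maps
d_w2 = w2(X, Y)
print(lot, d_w2)
```

Pairing `TX(ref_i)` with `TY(ref_i)` yields a valid coupling between X and Y, so the map-based distance always upper-bounds the true Wasserstein-2 distance; for clouds like these it is also close to it.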
no code implementations • 14 Nov 2023 • Keaton Hamm, Caroline Moosmüller, Bernhard Schmitzer, Matthew Thorpe
This paper aims to build the theoretical foundations for manifold learning algorithms in the space of absolutely continuous probability measures on a compact, convex subset of $\mathbb{R}^d$, metrized with the Wasserstein-2 distance $W$.
no code implementations • 14 Feb 2023 • Alexander Cloninger, Keaton Hamm, Varun Khurana, Caroline Moosmüller
We introduce LOT Wassmap, a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space.
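The following is a hedged sketch of a Wassmap-style pipeline, not the paper's algorithm: each cloud is embedded as its (vectorized) transport map from a fixed reference, and PCA on the embeddings then exposes low-dimensional structure. Here the clouds are a one-parameter family of translations, so a single principal direction should dominate; the helper `monge_map` is an assumed discrete-OT routine:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def monge_map(ref, cloud):
    """Optimal assignment map from reference onto cloud (squared cost)."""
    cost = ((ref[:, None, :] - cloud[None, :, :]) ** 2).sum(-1)
    return cloud[linear_sum_assignment(cost)[1]]

rng = np.random.default_rng(1)
ref = rng.normal(size=(150, 2))
template = rng.normal(size=(150, 2))
shifts = np.linspace(0.0, 5.0, 20)              # 1-d family of translations
clouds = [template + [s, 0.0] for s in shifts]

# Embed each cloud via its transport map from the shared reference,
# then run PCA (via SVD) on the centered embeddings.
E = np.stack([monge_map(ref, c).ravel() for c in clouds])
E -= E.mean(0)
U, S, Vt = np.linalg.svd(E, full_matrices=False)
ratio = S[0] ** 2 / (S ** 2).sum()
print(ratio)  # near 1: the family is (essentially) one-dimensional
```

For squared Euclidean cost, translating the target cloud only adds row and column constants to the cost matrix, so the optimal assignment is unchanged and the embeddings lie exactly on a line; PCA recovers the one-dimensional parameter.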
no code implementations • 25 Jan 2022 • Varun Khurana, Harish Kannan, Alexander Cloninger, Caroline Moosmüller
In this paper, we study supervised learning tasks on the space of probability measures.
no code implementations • 20 Aug 2020 • Caroline Moosmüller, Alexander Cloninger
The transform is defined by computing the optimal transport of each distribution to a fixed reference distribution, and offers benefits in computational speed and in determining classification boundaries.
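A minimal sketch of this idea, under illustrative assumptions (discrete clouds, a nearest-centroid rule standing in for a generic linear classifier; `monge_map` and `sample` are hypothetical helpers): distributions are transformed once via their transport maps to the reference, after which classification uses plain Euclidean geometry on the embeddings.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def monge_map(ref, cloud):
    """Optimal assignment map from reference onto cloud (squared cost)."""
    cost = ((ref[:, None, :] - cloud[None, :, :]) ** 2).sum(-1)
    return cloud[linear_sum_assignment(cost)[1]]

rng = np.random.default_rng(2)
ref = rng.normal(size=(100, 2))  # fixed reference distribution (sampled)

def sample(label):
    # class 0: clouds centered at (-2, 0); class 1: centered at (+2, 0)
    center = np.array([-2.0, 0.0]) if label == 0 else np.array([2.0, 0.0])
    return rng.normal(size=(100, 2)) + center

# Transform training distributions once; classify in the embedding space.
train = [(sample(l), l) for l in [0, 1] * 10]
emb = np.stack([monge_map(ref, c).ravel() for c, _ in train])
labels = np.array([l for _, l in train])
centroids = np.stack([emb[labels == l].mean(0) for l in (0, 1)])

test_cloud = sample(1)
z = monge_map(ref, test_cloud).ravel()
pred = int(np.argmin(((centroids - z) ** 2).sum(-1)))
print(pred)  # -> 1
```

The speed benefit in this setting: each new distribution requires one transport computation to the fixed reference, rather than pairwise transport problems against every training distribution.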