no code implementations • 17 Oct 2023 • Quentin Bouniot, Ievgen Redko, Anton Mallasto, Charlotte Laclau, Karol Arndt, Oliver Struckmeier, Markus Heinonen, Ville Kyrki, Samuel Kaski
The remarkable success of deep neural networks (DNNs) is often attributed to their high expressive power and their ability to approximate functions of arbitrary complexity.
1 code implementation • 12 May 2023 • Oliver Struckmeier, Ievgen Redko, Anton Mallasto, Karol Arndt, Markus Heinonen, Ville Kyrki
Optimal transport (OT) is a powerful geometric tool used to compare and align probability measures following the least effort principle.
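As a minimal illustration of the least-effort principle (not from the paper): in one dimension, the 1-Wasserstein distance between equal-size empirical measures reduces to matching sorted samples, since the optimal coupling in 1D never crosses mass.

```python
import numpy as np

def wasserstein_1d(x, y):
    # In 1D, the optimal ("least effort") coupling pairs the i-th
    # smallest source sample with the i-th smallest target sample.
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=1000)
b = rng.normal(2.0, 1.0, size=1000)
print(wasserstein_1d(a, b))  # close to the mean shift, 2.0
```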
no code implementations • 25 May 2021 • Anton Mallasto, Karol Arndt, Markus Heinonen, Samuel Kaski, Ville Kyrki
In this paper, we present affine transport -- a variant of optimal transport that models the mapping of state transition distributions from the source domain to the target domain with an affine transformation.
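A simplified sketch of why an affine map is a natural choice (illustrative only, not the paper's estimation procedure): between two Gaussians the optimal transport map is itself affine, and in 1D it has a closed form.

```python
import numpy as np

# The affine Monge map pushing N(m1, s1^2) onto N(m2, s2^2) in 1D:
#   T(x) = m2 + (s2 / s1) * (x - m1)
def affine_transport_1d(x, m1, s1, m2, s2):
    return m2 + (s2 / s1) * (x - m1)

rng = np.random.default_rng(1)
src = rng.normal(1.0, 0.5, size=5000)            # "source domain" samples
tgt = affine_transport_1d(src, 1.0, 0.5, 3.0, 2.0)
print(tgt.mean(), tgt.std())                     # roughly 3.0 and 2.0
```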
no code implementations • 5 Feb 2021 • Anton Mallasto
\emph{Optimal Transport} (OT) has emerged as an important computational tool in machine learning and computer vision, providing a geometrical framework for studying probability measures.
no code implementations • 19 Oct 2020 • Anton Mallasto, Markus Heinonen, Samuel Kaski
In machine learning and computer vision, optimal transport has had significant success in learning generative models and defining metric distances between structured and stochastic data objects that can be cast as probability measures.
no code implementations • 5 Jun 2020 • Anton Mallasto, Augusto Gerolin, Hà Quang Minh
As the geometries change by varying the regularization magnitude, we study the limiting cases of vanishing and infinite magnitudes, reconfirming well-known results on the limits of the Sinkhorn divergence.
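An illustrative sketch of the vanishing-regularization limit (not the paper's code): computing entropic OT between two small discrete measures with Sinkhorn iterations, and watching the transport cost approach the unregularized OT cost as the regularization magnitude shrinks.

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps, n_iter=2000):
    # Standard Sinkhorn iterations on the Gibbs kernel K = exp(-C/eps).
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # entropic transport plan
    return np.sum(P * C)

# Two points each; the exact (unregularized) OT cost here is 1.0.
a = np.array([0.5, 0.5])
b = np.array([0.5, 0.5])
x = np.array([0.0, 1.0])
y = np.array([1.0, 2.0])
C = (x[:, None] - y[None, :]) ** 2
for eps in (1.0, 0.1, 0.01):
    print(eps, sinkhorn_cost(a, b, C, eps))  # approaches 1.0 as eps -> 0
```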
no code implementations • 9 Oct 2019 • Anton Mallasto, Guido Montúfar, Augusto Gerolin
Generative modelling is often cast as minimizing a similarity measure between a data distribution and a model distribution.
no code implementations • 24 Feb 2019 • Anton Mallasto, Tom Dela Haije, Aasa Feragen
The method uses the Kullback-Leibler divergence, corresponding infinitesimally to the Fisher-Rao metric, which is pulled back to the parameter space of a family of probability distributions.
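The infinitesimal KL/Fisher-Rao relation can be checked numerically (an illustrative sketch, not the paper's method): for a small parameter perturbation, KL(p_theta || p_{theta+dtheta}) is approximately 0.5 * dtheta^T I(theta) dtheta, where I is the Fisher information. For a univariate Gaussian with theta = (mu, sigma), I(theta) = diag(1/sigma^2, 2/sigma^2).

```python
import numpy as np

# Closed-form KL divergence between univariate Gaussians.
def kl_gauss(m1, s1, m2, s2):
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

mu, sigma = 0.0, 1.5
dmu, dsigma = 1e-3, 1e-3
exact = kl_gauss(mu, sigma, mu + dmu, sigma + dsigma)
# Quadratic form with the Gaussian Fisher information diag(1/s^2, 2/s^2).
quad = 0.5 * (dmu**2 / sigma**2 + 2 * dsigma**2 / sigma**2)
print(exact, quad)   # agree to leading order
```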
1 code implementation • 10 Feb 2019 • Anton Mallasto, Jes Frellsen, Wouter Boomsma, Aasa Feragen
We contribute to the WGAN literature by introducing the family of $(q, p)$-Wasserstein GANs, which allow the use of more general $p$-Wasserstein metrics for $p\geq 1$ in the GAN learning procedure.
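As a hedged illustration of what varying the exponent changes (not the paper's training code): in 1D, the p-Wasserstein distance between equal-size empirical measures is an L^p norm of sorted-sample differences, so the effect of choosing p >= 1 is visible directly.

```python
import numpy as np

def wasserstein_p(x, y, p):
    # 1D closed form: match sorted samples, take the L^p mean.
    return np.mean(np.abs(np.sort(x) - np.sort(y)) ** p) ** (1.0 / p)

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, 2000)    # stand-in for a data distribution
model = rng.normal(0.0, 2.0, 2000)   # stand-in for a model distribution
for p in (1, 2, 4):
    print(p, wasserstein_p(data, model, p))  # non-decreasing in p
```

Larger p weights the tails of the mismatch more heavily, which is one reason a family of p-Wasserstein losses offers different geometries for GAN training.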
no code implementations • CVPR 2018 • Anton Mallasto, Aasa Feragen
Gaussian process (GP) regression is a powerful non-parametric regression tool that provides uncertainty estimates.
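A minimal GP regression sketch (illustrative; the kernel and hyperparameters are made up, not taken from the paper) showing how the posterior attaches a variance to each prediction: near zero at observed inputs, near the prior variance far from the data.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel with unit signal variance.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

X = np.array([-2.0, 0.0, 1.5])    # training inputs
y = np.sin(X)                     # training targets
Xs = np.array([0.0, 4.0])         # test inputs: one seen, one far away
noise = 1e-6

K = rbf(X, X) + noise * np.eye(len(X))
Ks = rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)
var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
print(mean)
print(var)   # near zero at the seen input, near 1.0 far from the data
```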
no code implementations • 23 May 2018 • Anton Mallasto, Søren Hauberg, Aasa Feragen
Latent variable models (LVMs) learn probabilistic models of data manifolds lying in an \emph{ambient} Euclidean space.
no code implementations • NeurIPS 2017 • Anton Mallasto, Aasa Feragen
We prove uniqueness of the barycenter of a population of GPs, as well as convergence of the metric and the barycenter of their finite-dimensional counterparts.
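An illustrative finite-dimensional special case (not the paper's GP construction): under the 2-Wasserstein metric, the barycenter of 1D Gaussians N(m_i, s_i^2) with weights w_i is again Gaussian, with mean sum(w_i * m_i) and standard deviation sum(w_i * s_i).

```python
import numpy as np

means = np.array([0.0, 2.0, 4.0])
stds = np.array([1.0, 2.0, 3.0])
w = np.array([0.2, 0.3, 0.5])   # barycentric weights, summing to 1

# Closed-form 1D Wasserstein barycenter: averaged quantile functions.
bary_mean = w @ means
bary_std = w @ stds
print(bary_mean, bary_std)   # 2.6 and 2.3
```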