Search Results for author: Anton Mallasto

Found 12 papers, 2 papers with code

Understanding deep neural networks through the lens of their non-linearity

no code implementations • 17 Oct 2023 • Quentin Bouniot, Ievgen Redko, Anton Mallasto, Charlotte Laclau, Karol Arndt, Oliver Struckmeier, Markus Heinonen, Ville Kyrki, Samuel Kaski

The remarkable success of deep neural networks (DNNs) is often attributed to their high expressive power and their ability to approximate functions of arbitrary complexity.

Affine Transport for Sim-to-Real Domain Adaptation

no code implementations • 25 May 2021 • Anton Mallasto, Karol Arndt, Markus Heinonen, Samuel Kaski, Ville Kyrki

In this paper, we present affine transport, a variant of optimal transport that models the mapping between the state transition distributions of the source and target domains with an affine transformation.

Domain Adaptation • OpenAI Gym • +1
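Since the abstract describes the method as an affine map between transition distributions, here is a minimal sketch of the closed-form affine optimal transport map between two fitted Gaussians; the placeholder data and the Gaussian-fitting step are assumptions for illustration, not the paper's estimator.

```python
import numpy as np
from scipy.linalg import sqrtm

def affine_ot_map(mu_s, cov_s, mu_t, cov_t):
    """Return (A, b) with T(x) = A @ x + b pushing N(mu_s, cov_s) onto N(mu_t, cov_t)."""
    s_half = sqrtm(cov_s).real
    s_half_inv = np.linalg.inv(s_half)
    A = s_half_inv @ sqrtm(s_half @ cov_t @ s_half).real @ s_half_inv
    return A, mu_t - A @ mu_s

# Fit Gaussians to source (sim) and target (real) transition samples, then map.
rng = np.random.default_rng(0)
src = rng.normal(size=(500, 3))               # simulated transitions (placeholder)
tgt = 2.0 * rng.normal(size=(500, 3)) + 1.0   # real transitions (placeholder)
A, b = affine_ot_map(src.mean(0), np.cov(src.T), tgt.mean(0), np.cov(tgt.T))
mapped = src @ A.T + b                        # transported source samples
```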

Estimating 2-Sinkhorn Divergence between Gaussian Processes from Finite-Dimensional Marginals

no code implementations • 5 Feb 2021 • Anton Mallasto

Optimal Transport (OT) has emerged as an important computational tool in machine learning and computer vision, providing a geometrical framework for studying probability measures.

Gaussian Processes
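A hedged sketch of the empirical setup the title suggests: sample finite-dimensional marginals of two GPs on a shared grid and compute a debiased Sinkhorn divergence with the POT library. The kernels, grid, and regularization value below are illustrative assumptions, not the paper's estimator.

```python
import numpy as np
import ot  # POT: pip install pot

def sinkhorn2_cost(X, Y, reg):
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(Y), 1.0 / len(Y))
    return ot.sinkhorn2(a, b, ot.dist(X, Y), reg)  # squared Euclidean ground cost

def sinkhorn_divergence(X, Y, reg):
    # Debiased divergence: S(X, Y) - (S(X, X) + S(Y, Y)) / 2
    return sinkhorn2_cost(X, Y, reg) - 0.5 * (
        sinkhorn2_cost(X, X, reg) + sinkhorn2_cost(Y, Y, reg))

# Finite-dimensional marginals: GP draws evaluated on a shared grid.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
sqd = (t[:, None] - t[None, :]) ** 2
K1 = np.exp(-0.5 * sqd / 0.1**2) + 1e-6 * np.eye(20)
K2 = np.exp(-0.5 * sqd / 0.3**2) + 1e-6 * np.eye(20)
X = rng.multivariate_normal(np.zeros(20), K1, size=200)
Y = rng.multivariate_normal(np.zeros(20), K2, size=200)
print(sinkhorn_divergence(X, Y, reg=5.0))  # reg chosen on the cost's scale
```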

Bayesian Inference for Optimal Transport with Stochastic Cost

no code implementations • 19 Oct 2020 • Anton Mallasto, Markus Heinonen, Samuel Kaski

In machine learning and computer vision, optimal transport has had significant success in learning generative models and defining metric distances between structured and stochastic data objects that can be cast as probability measures.

Bayesian Inference
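To illustrate the core idea of a stochastic cost, one can propagate cost uncertainty to the transport cost by Monte Carlo: sample cost matrices, solve OT for each, and summarize the spread. The Gaussian noise model below is an assumption; the paper's Bayesian treatment is richer than this sketch.

```python
import numpy as np
import ot  # POT: pip install pot

rng = np.random.default_rng(0)
xs = rng.normal(size=(30, 2))
xt = rng.normal(loc=1.0, size=(30, 2))
a = b = np.full(30, 1.0 / 30)
M0 = ot.dist(xs, xt)  # nominal squared-Euclidean cost

costs = []
for _ in range(200):
    noise = rng.normal(scale=0.1, size=M0.shape)  # assumed noise model on the cost
    costs.append(ot.emd2(a, b, np.maximum(M0 + noise, 0.0)))
print(np.mean(costs), np.std(costs))  # spread of the OT cost under cost uncertainty
```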

Entropy-Regularized $2$-Wasserstein Distance between Gaussian Measures

no code implementations • 5 Jun 2020 • Anton Mallasto, Augusto Gerolin, Hà Quang Minh

As the geometry changes with the regularization magnitude, we study the limiting cases of vanishing and infinite regularization, reconfirming well-known results on the limits of the Sinkhorn divergence.

Uncertainty Quantification
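A numerical check of the limiting behaviour the abstract describes, assuming the POT library: as the regularization vanishes, the entropic cost approaches the exact 2-Wasserstein cost. This is a generic empirical demo, not the paper's closed-form Gaussian expressions.

```python
import numpy as np
import ot  # POT: pip install pot

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=300)
Y = rng.multivariate_normal([2.0, 0.0], 0.5 * np.eye(2), size=300)
a = b = np.full(300, 1.0 / 300)
M = ot.dist(X, Y)  # squared Euclidean ground cost

w2_sq = ot.emd2(a, b, M)  # unregularized (exact) squared 2-Wasserstein cost
for reg in [10.0, 1.0, 0.1]:
    # 'sinkhorn_log' is POT's log-stabilized solver, useful for small reg
    ent = ot.sinkhorn2(a, b, M, reg, method="sinkhorn_log")
    print(f"reg={reg}: entropic cost {ent:.3f} vs exact {w2_sq:.3f}")
```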

How Well Do WGANs Estimate the Wasserstein Metric?

no code implementations • 9 Oct 2019 • Anton Mallasto, Guido Montúfar, Augusto Gerolin

Generative modelling is often cast as minimizing a similarity measure between a data distribution and a model distribution.
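For context on what "estimating the Wasserstein metric" means here, a minimal sketch of the original WGAN estimator: a weight-clipped critic trained on the Kantorovich-Rubinstein dual. The network, data, and hyperparameters are placeholders, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

x = torch.randn(1024, 2)        # "data" samples (placeholder)
y = torch.randn(1024, 2) + 2.0  # "model" samples (placeholder)

for _ in range(500):
    opt.zero_grad()
    # Kantorovich-Rubinstein dual: maximize E[f(x)] - E[f(y)] over 1-Lipschitz f
    loss = -(critic(x).mean() - critic(y).mean())
    loss.backward()
    opt.step()
    for p in critic.parameters():
        p.data.clamp_(-0.01, 0.01)  # crude Lipschitz control via weight clipping

print("critic estimate of W1:", (critic(x).mean() - critic(y).mean()).item())
```

Note that the clipping constant governs the critic's effective Lipschitz bound, so the raw value is at best a scaled estimate of W1; that gap between the dual objective and the true metric is exactly the kind of question the paper examines.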

A Formalization of The Natural Gradient Method for General Similarity Measures

no code implementations • 24 Feb 2019 • Anton Mallasto, Tom Dela Haije, Aasa Feragen

The method uses the Kullback-Leibler divergence, corresponding infinitesimally to the Fisher-Rao metric, which is pulled back to the parameter space of a family of probability distributions.
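A minimal sketch of a natural gradient step for a univariate Gaussian family N(mu, sigma^2): the Euclidean gradient is preconditioned with the inverse Fisher information, which is the pulled-back Fisher-Rao metric the abstract refers to. The coordinates and placeholder gradient are illustrative assumptions.

```python
import numpy as np

def fisher(mu, log_sigma):
    # Fisher information of N(mu, sigma^2) in (mu, log sigma) coordinates.
    sigma2 = np.exp(2.0 * log_sigma)
    return np.diag([1.0 / sigma2, 2.0])

def natural_gradient_step(theta, grad, lr=0.1):
    # theta = (mu, log_sigma); grad = Euclidean gradient of the loss at theta.
    F = fisher(*theta)
    return theta - lr * np.linalg.solve(F, grad)

theta = np.array([0.0, np.log(2.0)])
grad = np.array([1.0, -0.5])  # placeholder Euclidean gradient
print(natural_gradient_step(theta, grad))
```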

(q,p)-Wasserstein GANs: Comparing Ground Metrics for Wasserstein GANs

1 code implementation • 10 Feb 2019 • Anton Mallasto, Jes Frellsen, Wouter Boomsma, Aasa Feragen

We contribute to the WGAN literature by introducing the family of $(q, p)$-Wasserstein GANs, which allow the use of more general $p$-Wasserstein metrics for $p\geq 1$ in the GAN learning procedure.
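As a standalone illustration of the distance family (outside GAN training), a hedged sketch of an empirical $(q, p)$-Wasserstein distance: an $l_q$ ground metric raised to the $p$-th power, solved exactly with POT, then the $1/p$ root. Sample data below are placeholders.

```python
import numpy as np
import ot  # POT: pip install pot
from scipy.spatial.distance import cdist

def qp_wasserstein(X, Y, q=2.0, p=1.0):
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(Y), 1.0 / len(Y))
    M = cdist(X, Y, metric="minkowski", p=q) ** p  # l_q ground metric, p-th power
    return ot.emd2(a, b, M) ** (1.0 / p)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Y = rng.normal(loc=1.0, size=(200, 2))
print(qp_wasserstein(X, Y, q=1.0, p=2.0))  # e.g. the (1, 2)-Wasserstein distance
```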

Wrapped Gaussian Process Regression on Riemannian Manifolds

no code implementations • CVPR 2018 • Anton Mallasto, Aasa Feragen

Gaussian process (GP) regression is a powerful non-parametric regression tool that provides uncertainty estimates.

Gaussian Processes • regression
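A minimal sketch of the "wrap" idea on the unit sphere S^2: map targets into the tangent space at a base point with the Log map, run ordinary GP regression there, and map predictions back with the Exp map. The single fixed base point, kernel, and synthetic data are simplifying assumptions; the paper's construction is more general.

```python
import numpy as np

def exp_map(p, v):
    n = np.linalg.norm(v)
    return p if n < 1e-12 else np.cos(n) * p + np.sin(n) * v / n

def log_map(p, x):
    c = np.clip(np.dot(p, x), -1.0, 1.0)
    u = x - c * p
    n = np.linalg.norm(u)
    return np.zeros_like(p) if n < 1e-12 else np.arccos(c) * u / n

def gp_predict(X_train, Y_train, X_test, ell=0.5, noise=1e-4):
    # Standard GP posterior mean with an RBF kernel, applied coordinate-wise.
    k = lambda A, B: np.exp(-0.5 * ((A[:, None] - B[None, :]) / ell) ** 2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    return k(X_test, X_train) @ np.linalg.solve(K, Y_train)

base = np.array([0.0, 0.0, 1.0])  # base point on the sphere
X = np.linspace(0.0, 1.0, 15)     # scalar inputs
Y = np.stack([exp_map(base, 0.3 * np.array([np.sin(t), np.cos(t), 0.0])) for t in X])
V = np.stack([log_map(base, y) for y in Y])            # targets in the tangent space
V_pred = gp_predict(X, V, np.linspace(0.0, 1.0, 50))   # GP regression per coordinate
Y_pred = np.stack([exp_map(base, v) for v in V_pred])  # wrap predictions back to S^2
```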

Probabilistic Riemannian submanifold learning with wrapped Gaussian process latent variable models

no code implementations • 23 May 2018 • Anton Mallasto, Søren Hauberg, Aasa Feragen

Latent variable models (LVMs) learn probabilistic models of data manifolds lying in an ambient Euclidean space.

Uncertainty Quantification

Learning from uncertain curves: The 2-Wasserstein metric for Gaussian processes

no code implementations • NeurIPS 2017 • Anton Mallasto, Aasa Feragen

We prove uniqueness of the barycenter of a population of GPs, as well as convergence of the metric and the barycenter of their finite-dimensional counterparts.

Gaussian Processes
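The finite-dimensional counterparts the abstract mentions are Gaussian marginals, for which the 2-Wasserstein distance has a well-known closed form. A sketch of that formula applied to two GP marginals on a grid; the grid and kernels are placeholder assumptions.

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(m1, S1, m2, S2):
    # Closed form: W2^2 = ||m1 - m2||^2 + tr(S1 + S2 - 2 (S1^(1/2) S2 S1^(1/2))^(1/2))
    S1h = sqrtm(S1).real
    cross = sqrtm(S1h @ S2 @ S1h).real
    return np.sqrt(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * cross))

t = np.linspace(0.0, 1.0, 30)[:, None]
K = lambda ell: np.exp(-0.5 * (t - t.T) ** 2 / ell**2) + 1e-8 * np.eye(30)
print(w2_gaussian(np.zeros(30), K(0.1), np.zeros(30), K(0.3)))
```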
