no code implementations • 5 Dec 2023 • Yiheng Jiang, Sinho Chewi, Aram-Alexandre Pooladian
We develop a theory of finite-dimensional polyhedral subsets over the Wasserstein space and optimization of functionals over them via first-order methods.
no code implementations • 20 Jun 2023 • Michal Klein, Aram-Alexandre Pooladian, Pierre Ablin, Eugène Ndiaye, Jonathan Niles-Weed, Marco Cuturi
Because of such difficulties, existing approaches rarely depart from the default choice of estimating such maps with the simple squared-Euclidean distance as the ground cost, $c(x, y)=\|x-y\|^2_2$.
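To make the default ground cost concrete: a minimal numpy sketch of entropy-regularized optimal transport (Sinkhorn iterations) between two empirical measures under $c(x, y)=\|x-y\|^2_2$. This is an illustrative sketch, not the authors' implementation; the regularization strength `eps` and iteration count are arbitrary choices.

```python
import numpy as np

def sinkhorn(x, y, eps=0.5, n_iter=200):
    """Entropic OT coupling between two empirical measures using the
    default squared-Euclidean ground cost c(x, y) = ||x - y||_2^2."""
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # cost matrix
    K = np.exp(-C / eps)                                # Gibbs kernel
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)     # uniform weights
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):                             # Sinkhorn updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]                  # coupling matrix

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(5, 2)), rng.normal(size=(6, 2)))
```

Choosing a different ground cost $c$ changes only the cost matrix `C`; the difficulty the paper addresses is that everything downstream (the estimated map) depends on that choice.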
no code implementations • 28 Apr 2023 • Aram-Alexandre Pooladian, Heli Ben-Hamu, Carles Domingo-Enrich, Brandon Amos, Yaron Lipman, Ricky T. Q. Chen
Simulation-free methods for training continuous-time generative models construct probability paths that go between noise distributions and individual data samples.
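One common simulation-free construction of such a path is a straight-line interpolant between a noise sample and a data sample, whose target velocity is constant. The sketch below illustrates that generic construction, not necessarily the exact path used in the paper.

```python
import numpy as np

def conditional_path(x0, x1, t):
    """Linear probability path x_t = (1 - t) * x0 + t * x1 between a noise
    sample x0 and a data sample x1, with constant velocity target x1 - x0.
    No ODE simulation is needed to sample x_t, hence 'simulation-free'."""
    xt = (1 - t) * x0 + t * x1
    velocity = x1 - x0
    return xt, velocity

noise = np.zeros(3)
data = np.ones(3)
xt, vel = conditional_path(noise, data, t=0.5)
```

A model is then regressed onto the velocity target at sampled times `t`, avoiding backpropagation through an ODE solver.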
no code implementations • 23 Feb 2023 • Carles Domingo-Enrich, Aram-Alexandre Pooladian
In this short note, we complement these existing results in the literature by providing an explicit expansion of $\text{KL}(\rho_t^{\text{FR}}\|\pi)$ in terms of $e^{-t}$, where $(\rho_t^{\text{FR}})_{t\geq 0}$ is the FR gradient flow of the KL divergence.
no code implementations • 26 Jan 2023 • Aram-Alexandre Pooladian, Vincent Divol, Jonathan Niles-Weed
We consider the problem of estimating the optimal transport map between two probability distributions, $P$ and $Q$ in $\mathbb R^d$, on the basis of i.i.d. samples.
no code implementations • 7 Dec 2022 • Vincent Divol, Jonathan Niles-Weed, Aram-Alexandre Pooladian
To ensure identifiability, we assume that $T = \nabla \varphi_0$ is the gradient of a convex function, in which case $T$ is known as an \emph{optimal transport map}.
no code implementations • 24 Sep 2021 • Aram-Alexandre Pooladian, Jonathan Niles-Weed
We develop a computationally tractable method for estimating the optimal map between two distributions over $\mathbb{R}^d$ with rigorous finite-sample guarantees.
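One computationally tractable estimator in this spirit is the barycentric projection of an entropic coupling: solve Sinkhorn on the samples, then map each source point to its conditional mean under the coupling. The following is a hedged numpy sketch of that idea, with arbitrary `eps` and iteration count, not a reproduction of the paper's exact estimator or its guarantees.

```python
import numpy as np

def entropic_map_estimate(x, y, eps=0.5, n_iter=300):
    """Estimate a transport map from samples x to samples y via the
    barycentric projection T_hat(x_i) = sum_j P_ij y_j / sum_j P_ij,
    where P is the entropic optimal coupling (Sinkhorn)."""
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]
    # conditional mean of y given each source point under the coupling
    return (P @ y) / P.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 2))
y = rng.normal(loc=5.0, size=(10, 2))
T_hat = entropic_map_estimate(x, y)
```

Each `T_hat[i]` is a convex combination of the target samples, so the estimated map always lands in their convex hull.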
no code implementations • 10 Jun 2020 • Chris Finlay, Augusto Gerolin, Adam M. Oberman, Aram-Alexandre Pooladian
We approach the problem of learning continuous normalizing flows from a dual perspective motivated by entropy-regularized optimal transport, in which continuous normalizing flows are cast as gradients of scalar potential functions.
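To illustrate what "flows cast as gradients of scalar potentials" means operationally: the velocity field is $\nabla \varphi$ for a scalar function $\varphi$, and samples evolve by integrating that field. The toy sketch below uses a hand-picked quadratic potential and finite differences in place of a learned neural potential; it is an illustration of the structure, not the paper's method.

```python
import numpy as np

def potential(x):
    # toy scalar potential; the paper learns this with a neural network
    return 0.5 * (x ** 2).sum()

def grad(f, x, h=1e-5):
    """Central finite-difference gradient of a scalar function f at x,
    giving the velocity field v(x) = grad(phi)(x)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def flow(x0, dt=0.01, steps=100):
    """Forward-Euler integration of dx/dt = -grad(phi)(x)."""
    x = x0.copy()
    for _ in range(steps):
        x = x - dt * grad(potential, x)
    return x

x0 = np.array([1.0, 2.0])
x_final = flow(x0)
```

With the quadratic potential, the flow contracts points toward the origin; a learned potential would instead transport a noise distribution toward data.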
1 code implementation • 4 Oct 2019 • Aram-Alexandre Pooladian, Chris Finlay, Adam M. Oberman
Successfully training deep neural networks often requires either batch normalization or appropriate weight initialization, both of which come with their own challenges.
2 code implementations • 5 Aug 2019 • Aram-Alexandre Pooladian, Chris Finlay, Tim Hoheisel, Adam Oberman
This includes, but is not limited to, $\ell_1$, $\ell_2$, and $\ell_\infty$ perturbations; the $\ell_0$ counting "norm" (i.e., true sparseness); and the total variation seminorm, which is a (non-$\ell_p$) convolutional dissimilarity measuring local pixel changes.
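The dissimilarities listed above can all be evaluated on a perturbation array directly. A minimal numpy sketch for a single-channel $H \times W$ perturbation, using the anisotropic form of total variation (one of several standard variants):

```python
import numpy as np

def perturbation_sizes(delta):
    """Measure an image perturbation delta (H x W array) under the
    l1, l2, l_inf norms, the l0 counting 'norm', and the anisotropic
    total variation seminorm (sum of absolute neighbor differences)."""
    d = delta.ravel()
    tv = (np.abs(np.diff(delta, axis=0)).sum()    # vertical differences
          + np.abs(np.diff(delta, axis=1)).sum()) # horizontal differences
    return {
        "l1": np.abs(d).sum(),
        "l2": np.sqrt((d ** 2).sum()),
        "linf": np.abs(d).max(),
        "l0": np.count_nonzero(d),  # true sparseness; not a norm
        "tv": tv,
    }

delta = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
sizes = perturbation_sizes(delta)
```

Note that a one-pixel perturbation has small $\ell_0$ but can still have large TV, since TV penalizes local changes relative to neighboring pixels.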
1 code implementation • ICCV 2019 • Chris Finlay, Aram-Alexandre Pooladian, Adam M. Oberman
Adversarial attacks formally correspond to an optimization problem: find a minimum norm image perturbation, constrained to cause misclassification.
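For a linear classifier the minimum-norm formulation has a closed form, which makes the optimization view concrete. The sketch below handles only this linear case; attacks on deep networks approximate the same problem iteratively.

```python
import numpy as np

def min_norm_perturbation_linear(w, b, x):
    """Smallest l2-norm perturbation moving x onto the decision boundary
    of the linear classifier f(x) = w @ x + b (the projection of x onto
    the hyperplane f = 0). A misclassifying perturbation is this plus an
    arbitrarily small overshoot."""
    f = w @ x + b
    return -f * w / (w @ w)

w = np.array([1.0, 1.0])
b = 0.0
x = np.array([2.0, 0.0])
delta = min_norm_perturbation_linear(w, b, x)
```

The perturbation norm equals $|f(x)| / \|w\|_2$, the distance from $x$ to the decision boundary.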