no code implementations • 11 Nov 2024 • Ricardo Baptista, Aram-Alexandre Pooladian, Michael Brennan, Youssef Marzouk, Jonathan Niles-Weed
Conditional simulation is a fundamental task in statistical modeling: generate samples from the conditional distributions, given finitely many data points from a joint distribution.
1 code implementation • 21 Aug 2024 • Aram-Alexandre Pooladian, Jonathan Niles-Weed
Instead, we show that the potentials obtained from solving the static entropic optimal transport problem between the source and target samples can be modified to yield a natural plug-in estimator of the time-dependent drift that defines the bridge between two measures.
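As a rough sketch of this plug-in idea (not the paper's exact estimator; the Brownian reference dynamics, the $\varepsilon$ scaling, and the potential convention below are all assumptions), the Sinkhorn potentials from the POT library can be reweighted into a time-dependent drift:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))        # source samples
Y = rng.normal(size=(200, 2)) + 3.0  # target samples
eps = 1.0

# static entropic OT between the empirical measures (Sinkhorn)
a = np.full(len(X), 1 / len(X))
b = np.full(len(Y), 1 / len(Y))
_, log = ot.sinkhorn(a, b, ot.dist(X, Y) / 2, reg=eps, log=True)
g = eps * np.log(log["v"])  # one common convention for the dual potential on Y

def plugin_drift(x, t):
    # softmax weights: static potential plus a heat-kernel term (assumed form)
    logw = g / eps - np.sum((x - Y) ** 2, axis=1) / (2 * eps * (1 - t))
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # the drift pulls x toward a reweighted barycenter of the target samples
    return (w @ Y - x) / (1 - t)

print(plugin_drift(np.zeros(2), t=0.5))
```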
no code implementations • 7 Jun 2024 • Parnian Kassraie, Aram-Alexandre Pooladian, Michal Klein, James Thornton, Jonathan Niles-Weed, Marco Cuturi
Optimal transport (OT) has profoundly impacted machine learning by providing theoretical and computational tools to realign datasets.
1 code implementation • 1 Jun 2024 • Aram-Alexandre Pooladian, Carles Domingo-Enrich, Ricky T. Q. Chen, Brandon Amos
We investigate the optimal transport problem between probability measures when the underlying cost function is understood to satisfy a least action principle, also known as a Lagrangian cost.
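For reference, the least-action cost has the standard form

$$ c(x, y) = \inf_{\gamma(0)=x,\ \gamma(1)=y} \int_0^1 L\big(\gamma(t), \dot\gamma(t)\big)\, dt, $$

and the classical kinetic-energy Lagrangian $L(x, v) = \tfrac{1}{2}\|v\|^2$ recovers the familiar squared-distance cost $c(x, y) = \tfrac{1}{2}\|x - y\|^2$.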
1 code implementation • 5 Dec 2023 • Yiheng Jiang, Sinho Chewi, Aram-Alexandre Pooladian
We develop a theory of finite-dimensional polyhedral subsets over the Wasserstein space and optimization of functionals over them via first-order methods.
no code implementations • 20 Jun 2023 • Michal Klein, Aram-Alexandre Pooladian, Pierre Ablin, Eugène Ndiaye, Jonathan Niles-Weed, Marco Cuturi
Given a source and a target probability measure supported on $\mathbb{R}^d$, the Monge problem asks to find the most efficient way to map one distribution to the other.
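In symbols, for a cost $c$, the Monge problem reads

$$ \inf_{T \,:\, T_{\#}P = Q} \int_{\mathbb{R}^d} c\big(x, T(x)\big)\, dP(x), $$

where $T_{\#}P$ denotes the pushforward of the source measure $P$ under the map $T$.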
no code implementations • 28 Apr 2023 • Aram-Alexandre Pooladian, Heli Ben-Hamu, Carles Domingo-Enrich, Brandon Amos, Yaron Lipman, Ricky T. Q. Chen
Simulation-free methods for training continuous-time generative models construct probability paths that go between noise distributions and individual data samples.
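A minimal sketch of such a simulation-free objective, in the flow-matching style with linear interpolation paths (the path choice and names here are illustrative, not this paper's specific construction):

```python
import torch

def simulation_free_loss(model, data):
    """One training step: regress the model onto the path's velocity field."""
    noise = torch.randn_like(data)
    t = torch.rand(data.shape[0], 1)   # per-sample time in [0, 1]
    x_t = (1 - t) * noise + t * data   # point on the noise-to-data path
    v_target = data - noise            # constant velocity of the linear path
    return ((model(x_t, t) - v_target) ** 2).mean()
```

No ODE is simulated during training; the model only ever sees points drawn directly along the path.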
no code implementations • 23 Feb 2023 • Carles Domingo-Enrich, Aram-Alexandre Pooladian
In this short note, we complement these existing results in the literature by providing an explicit expansion of $\text{KL}(\rho_t^{\text{FR}}\|\pi)$ in terms of $e^{-t}$, where $(\rho_t^{\text{FR}})_{t\geq 0}$ is the FR gradient flow of the KL divergence.
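For context, the Fisher-Rao (FR) gradient flow of $\rho \mapsto \text{KL}(\rho\|\pi)$ is usually written as the birth-death dynamics

$$ \partial_t \rho_t = -\rho_t \Big( \log \tfrac{\rho_t}{\pi} - \text{KL}(\rho_t \,\|\, \pi) \Big), $$

whose linearization around $\pi$ contracts at rate $e^{-t}$, which is what makes an expansion in powers of $e^{-t}$ natural.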
no code implementations • 26 Jan 2023 • Aram-Alexandre Pooladian, Vincent Divol, Jonathan Niles-Weed
We consider the problem of estimating the optimal transport map between two probability distributions, $P$ and $Q$ in $\mathbb{R}^d$, on the basis of i.i.d. samples.
no code implementations • 7 Dec 2022 • Vincent Divol, Jonathan Niles-Weed, Aram-Alexandre Pooladian
To ensure identifiability, we assume that $T = \nabla \varphi_0$ is the gradient of a convex function, in which case $T$ is known as an \emph{optimal transport map}.
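This identifiability assumption is exactly Brenier's characterization for the quadratic cost: under standard conditions (e.g. $P$ absolutely continuous), the gradient of a convex function pushing $P$ forward to $Q$ is the unique minimizer

$$ T \;=\; \arg\min_{S \,:\, S_{\#}P = Q} \int \|x - S(x)\|^2 \, dP(x). $$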
no code implementations • 24 Sep 2021 • Aram-Alexandre Pooladian, Jonathan Niles-Weed
We develop a computationally tractable method for estimating the optimal map between two distributions over $\mathbb{R}^d$ with rigorous finite-sample guarantees.
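The entropic approach admits a very short implementation: solve Sinkhorn between the two samples and take the barycentric projection of the plan (a minimal sketch using the POT library; the regularization level and test data are illustrative):

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def entropic_map_estimate(X, Y, eps=0.1):
    """Estimate T(x_i) as E[Y | X = x_i] under the entropic OT plan."""
    a = np.full(len(X), 1 / len(X))
    b = np.full(len(Y), 1 / len(Y))
    M = ot.dist(X, Y)                  # squared Euclidean cost matrix
    G = ot.sinkhorn(a, b, M, reg=eps)  # entropic OT plan
    # barycentric projection: row-normalized average of target samples
    return (G @ Y) / G.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
Y = rng.normal(size=(300, 2)) @ np.diag([2.0, 0.5]) + 1.0
print(entropic_map_estimate(X, Y)[:3])  # estimated images of the first 3 points
```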
no code implementations • 10 Jun 2020 • Chris Finlay, Augusto Gerolin, Adam M. Oberman, Aram-Alexandre Pooladian
We approach the problem of learning continuous normalizing flows from a dual perspective motivated by entropy-regularized optimal transport, in which continuous normalizing flows are cast as gradients of scalar potential functions.
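The structural constraint is easy to express with automatic differentiation: rather than parametrizing the flow's velocity field directly, parametrize a scalar potential and differentiate it (a sketch; `phi` stands in for any scalar-output network):

```python
import torch

def potential_velocity(phi, x, t):
    """Velocity field of the flow as the gradient of a scalar potential."""
    x = x.detach().requires_grad_(True)
    # phi(x, t) returns one scalar per sample; v = grad_x phi(x, t)
    (v,) = torch.autograd.grad(phi(x, t).sum(), x, create_graph=True)
    return v
```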
1 code implementation • 4 Oct 2019 • Aram-Alexandre Pooladian, Chris Finlay, Adam M. Oberman
Successfully training deep neural networks often requires either batch normalization or appropriate weight initialization, both of which come with their own challenges.
2 code implementations • 5 Aug 2019 • Aram-Alexandre Pooladian, Chris Finlay, Tim Hoheisel, Adam Oberman
This includes, but is not limited to, $\ell_1, \ell_2$, and $\ell_\infty$ perturbations; the $\ell_0$ counting "norm" (i.e., true sparseness); and the total variation seminorm, which is a (non-$\ell_p$) convolutional dissimilarity measuring local pixel changes.
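To make the menu concrete, each of these measures is a one-liner on an image perturbation $\delta$ (plain NumPy; the anisotropic definition of total variation below is one common convention):

```python
import numpy as np

def perturbation_sizes(delta):
    """Sizes of an H x W image perturbation under each measure."""
    return {
        "l1":   np.abs(delta).sum(),
        "l2":   np.sqrt((delta ** 2).sum()),
        "linf": np.abs(delta).max(),
        "l0":   np.count_nonzero(delta),  # counting "norm": true sparseness
        # total variation: absolute differences between neighboring pixels
        "tv":   np.abs(np.diff(delta, axis=0)).sum()
              + np.abs(np.diff(delta, axis=1)).sum(),
    }
```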
1 code implementation • ICCV 2019 • Chris Finlay, Aram-Alexandre Pooladian, Adam M. Oberman
Adversarial attacks formally correspond to an optimization problem: find a minimum norm image perturbation, constrained to cause misclassification.
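Written out: for a classifier with class scores $f_k$ and an input $x$ with true label $y$, the attack solves

$$ \min_{\delta}\ \|\delta\| \quad \text{subject to} \quad \arg\max_k f_k(x + \delta) \neq y, $$

where $\|\cdot\|$ can be any of the perturbation measures above.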