Search Results for author: Jonathan Niles-Weed

Found 27 papers, 4 papers with code

Supervised Quantile Normalization for Low Rank Matrix Factorization

no code implementations • ICML 2020 • Marco Cuturi, Olivier Teboul, Jonathan Niles-Weed, Jean-Philippe Vert

Low rank matrix factorization is a fundamental building block in machine learning, used for instance to summarize gene expression profile data or word-document counts.

Trajectory Inference with Smooth Schrödinger Bridges

1 code implementation • 1 Mar 2025 • Wanli Hong, Yuliang Shi, Jonathan Niles-Weed

Motivated by applications in trajectory inference and particle tracking, we introduce Smooth Schrödinger Bridges.

Conditional simulation via entropic optimal transport: Toward non-parametric estimation of conditional Brenier maps

no code implementations • 11 Nov 2024 • Ricardo Baptista, Aram-Alexandre Pooladian, Michael Brennan, Youssef Marzouk, Jonathan Niles-Weed

Conditional simulation is a fundamental task in statistical modeling: Generate samples from the conditionals given finitely many data points from a joint distribution.

Bayesian Inference

Learning large softmax mixtures with warm start EM

no code implementations • 16 Sep 2024 • Xin Bing, Florentina Bunea, Jonathan Niles-Weed, Marten Wegkamp

We develop a new method-of-moments (MoM) parameter estimator based on latent moment estimation that is tailored to our model, and provide the first theoretical analysis of a MoM-based procedure for softmax mixtures.

Attribute • parameter estimation

Plug-in estimation of Schrödinger bridges

1 code implementation • 21 Aug 2024 • Aram-Alexandre Pooladian, Jonathan Niles-Weed

Instead, we show that the potentials obtained from solving the static entropic optimal transport problem between the source and target samples can be modified to yield a natural plug-in estimator of the time-dependent drift that defines the bridge between two measures.

Convergence of Unadjusted Langevin in High Dimensions: Delocalization of Bias

no code implementations • 20 Aug 2024 • Yifan Chen, Xiaoou Cheng, Jonathan Niles-Weed, Jonathan Weare

The unadjusted Langevin algorithm is commonly used to sample probability distributions in extremely high-dimensional settings.
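For readers unfamiliar with the algorithm itself, here is a minimal sketch of standard ULA (not the paper's dimension-dependence analysis): discretize the Langevin diffusion for a target density proportional to exp(-U).

```python
import numpy as np

def ula_sample(grad_U, x0, step, n_steps, rng=None):
    """Unadjusted Langevin algorithm: x <- x - h * grad_U(x) + sqrt(2h) * noise.
    Samples approximately from the density proportional to exp(-U); the lack
    of a Metropolis correction introduces a step-size-dependent bias."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Example: sample a 1000-dimensional standard Gaussian, U(x) = ||x||^2 / 2.
x = ula_sample(grad_U=lambda x: x, x0=np.zeros(1000), step=0.01, n_steps=2000)
```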

Statistical optimal transport

no code implementations • 25 Jul 2024 • Sinho Chewi, Jonathan Niles-Weed, Philippe Rigollet

We present an introduction to the field of statistical optimal transport, based on lectures given at the École d'Été de Probabilités de Saint-Flour XLIX.

Progressive Entropic Optimal Transport Solvers

no code implementations • 7 Jun 2024 • Parnian Kassraie, Aram-Alexandre Pooladian, Michal Klein, James Thornton, Jonathan Niles-Weed, Marco Cuturi

Optimal transport (OT) has profoundly impacted machine learning by providing theoretical and computational tools to realign datasets.

Learning Elastic Costs to Shape Monge Displacements

no code implementations • 20 Jun 2023 • Michal Klein, Aram-Alexandre Pooladian, Pierre Ablin, Eugène Ndiaye, Jonathan Niles-Weed, Marco Cuturi

Given a source and a target probability measure supported on $\mathbb{R}^d$, the Monge problem asks to find the most efficient way to map one distribution to the other.
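For reference, with a cost function $c$ the Monge problem reads (standard formulation, not specific to this paper):

```latex
\inf_{T \,:\, T_{\#}P = Q} \; \int_{\mathbb{R}^d} c\bigl(x, T(x)\bigr)\, \mathrm{d}P(x),
```

where $T_{\#}P = Q$ means $T$ pushes $P$ forward onto $Q$; this paper studies elastic costs beyond the classical quadratic $c(x, y) = \tfrac{1}{2}\|x - y\|^2$.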

Minimax estimation of discontinuous optimal transport maps: The semi-discrete case

no code implementations • 26 Jan 2023 • Aram-Alexandre Pooladian, Vincent Divol, Jonathan Niles-Weed

We consider the problem of estimating the optimal transport map between two probability distributions, $P$ and $Q$ in $\mathbb{R}^d$, on the basis of i.i.d. samples.

Optimal transport map estimation in general function spaces

no code implementations • 7 Dec 2022 • Vincent Divol, Jonathan Niles-Weed, Aram-Alexandre Pooladian

To ensure identifiability, we assume that $T = \nabla \varphi_0$ is the gradient of a convex function, in which case $T$ is known as an \emph{optimal transport map}.

Perturbation Analysis of Neural Collapse

no code implementations • 29 Oct 2022 • Tom Tirer, Haoxiang Huang, Jonathan Niles-Weed

In this paper, we propose a richer model that can capture this phenomenon by forcing the features to stay in the vicinity of a predefined features matrix (e.g., intermediate features).

Existence and Minimax Theorems for Adversarial Surrogate Risks in Binary Classification

no code implementations • 18 Jun 2022 • Natalie S. Frank, Jonathan Niles-Weed

Adversarial training is one of the most popular methods for training models robust to adversarial attacks; however, it is not well understood from a theoretical perspective.

Adversarial Robustness • Binary Classification

An improved central limit theorem and fast convergence rates for entropic transportation costs

no code implementations • 19 Apr 2022 • Eustasio del Barrio, Alberto Gonzalez-Sanz, Jean-Michel Loubes, Jonathan Niles-Weed

We prove a central limit theorem for the entropic transportation cost between subgaussian probability measures, centered at the population cost.
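Schematically (the precise assumptions and the variance expression are in the paper), a CLT of this type reads:

```latex
\sqrt{n}\,\Bigl(S_\varepsilon(P_n, Q_n) - S_\varepsilon(P, Q)\Bigr) \;\xrightarrow{\;d\;}\; \mathcal{N}(0, \sigma^2),
```

where $S_\varepsilon$ is the entropic transportation cost, $P_n, Q_n$ are empirical measures, and $\sigma^2$ depends on the optimal entropic potentials.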

Deep Probability Estimation

no code implementations • 21 Nov 2021 • Sheng Liu, Aakash Kaku, Weicheng Zhu, Matan Leibovich, Sreyas Mohan, Boyang Yu, Haoxiang Huang, Laure Zanna, Narges Razavian, Jonathan Niles-Weed, Carlos Fernandez-Granda

Reliable probability estimation is of crucial importance in many real-world applications where there is inherent (aleatoric) uncertainty.

Autonomous Vehicles • Binary Classification +2

Entropic estimation of optimal transport maps

no code implementations • 24 Sep 2021 • Aram-Alexandre Pooladian, Jonathan Niles-Weed

We develop a computationally tractable method for estimating the optimal map between two distributions over $\mathbb{R}^d$ with rigorous finite-sample guarantees.
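The construction can be sketched as follows: run Sinkhorn between the two samples, then map a new point to the conditional mean of the entropic coupling. A minimal numpy sketch under the quadratic cost (log-domain stabilization omitted for brevity):

```python
import numpy as np

def entropic_map(X, Y, eps=0.1, n_iter=500):
    """Sketch: estimate an OT map from samples X (n, d) to Y (m, d) by
    solving entropic OT with Sinkhorn, then mapping any point x to the
    conditional mean of the entropic coupling given x."""
    n, m = len(X), len(Y)
    C = 0.5 * ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # quadratic cost
    K = np.exp(-C / eps)  # may underflow for small eps; log-domain is safer
    a, b = np.ones(n), np.ones(m)                    # Sinkhorn scalings
    u, v = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
    for _ in range(n_iter):
        a = u / (K @ b)
        b = v / (K.T @ a)

    def T_hat(x):
        # Weights proportional to exp((g_j - c(x, y_j)) / eps), since b_j = exp(g_j / eps).
        cx = 0.5 * ((x[None, :] - Y) ** 2).sum(-1)
        w = b * np.exp(-cx / eps)
        return (w[:, None] * Y).sum(0) / w.sum()

    return T_hat

# Usage: T = entropic_map(X, Y); y_hat = T(x_new)  # x_new has shape (d,)
```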

Plugin Estimation of Smooth Optimal Transport Maps

1 code implementation • 26 Jul 2021 • Tudor Manole, Sivaraman Balakrishnan, Jonathan Niles-Weed, Larry Wasserman

Our work also provides new bounds on the risk of corresponding plugin estimators for the quadratic Wasserstein distance, and we show how this problem relates to that of estimating optimal transport maps using stability arguments for smooth and strongly convex Brenier potentials.

It was "all" for "nothing": sharp phase transitions for noiseless discrete channels

no code implementations • 24 Feb 2021 • Jonathan Niles-Weed, Ilias Zadik

We establish a phase transition known as the "all-or-nothing" phenomenon for noiseless discrete channels.

Statistics Theory • Information Theory • Probability
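Informally, "all-or-nothing" means the normalized minimum mean-squared error jumps from its trivial value to zero at a sharp sample-size threshold $n_*$ (a schematic statement; see the paper for the exact scaling):

```latex
\frac{\mathrm{MMSE}(n)}{\mathrm{MMSE}(0)} \;\longrightarrow\;
\begin{cases}
1, & n \le (1-\delta)\, n_*,\\[2pt]
0, & n \ge (1+\delta)\, n_*,
\end{cases}
\qquad \text{for every fixed } \delta > 0.
```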

Streaming k-PCA: Efficient guarantees for Oja's algorithm, beyond rank-one updates

no code implementations • 6 Feb 2021 • De Huang, Jonathan Niles-Weed, Rachel Ward

We analyze Oja's algorithm for streaming $k$-PCA and prove that it achieves performance nearly matching that of an optimal offline algorithm.
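As a reminder of the algorithm being analyzed, a textbook implementation keeps a d×k frame, takes a stochastic gradient step with each sample, and re-orthonormalizes (this is the standard version, not the paper's analysis):

```python
import numpy as np

def oja_kpca(stream, d, k, lr=0.01):
    """Sketch of Oja's algorithm for streaming k-PCA: a gradient step on
    x x^T followed by re-orthonormalization (QR) after each sample."""
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((d, k)))  # random orthonormal init
    for x in stream:                     # x is a length-d sample
        Q = Q + lr * np.outer(x, x @ Q)  # Oja update: Q += lr * x x^T Q
        Q, _ = np.linalg.qr(Q)           # retract to an orthonormal frame
    return Q                             # estimated top-k eigenspace
```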

The Discrepancy of Random Rectangular Matrices

no code implementations • 11 Jan 2021 • Dylan J. Altschuler, Jonathan Niles-Weed

A recent approach to the Beck-Fiala conjecture, a fundamental problem in combinatorics, has been to understand when random integer matrices have constant discrepancy.

Probability • Discrete Mathematics • Combinatorics
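For context, the discrepancy of a matrix $A \in \mathbb{R}^{m \times n}$ is the best worst-case imbalance over signings of its columns:

```latex
\operatorname{disc}(A) \;=\; \min_{x \in \{-1, +1\}^n} \; \|Ax\|_\infty .
```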

Sinkhorn EM: An Expectation-Maximization algorithm based on entropic optimal transport

no code implementations • 30 Jun 2020 • Gonzalo Mena, Amin Nejatbakhsh, Erdem Varol, Jonathan Niles-Weed

We study Sinkhorn EM (sEM), a variant of the expectation maximization (EM) algorithm for mixtures based on entropic optimal transport.
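The key difference from vanilla EM is the E-step: rather than a per-point softmax, responsibilities are obtained by a Sinkhorn projection so that their average matches the known mixing weights. A log-domain sketch of such an E-step, assuming the weights are given:

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_e_step(log_lik, weights, n_iter=100):
    """Sketch of a Sinkhorn E-step: project the likelihood matrix onto
    couplings whose average responsibilities match the known mixture
    weights (log-domain Sinkhorn for numerical stability).

    log_lik : (n, K) array, log p_k(x_i) for each point and component
    weights : (K,) known mixing proportions, summing to 1
    """
    n, K = log_lik.shape
    f = np.zeros(n)  # row (data point) potentials
    g = np.zeros(K)  # column (component) potentials
    for _ in range(n_iter):
        f = -logsumexp(log_lik + g, axis=1) + np.log(1.0 / n)
        g = -logsumexp(log_lik + f[:, None], axis=0) + np.log(weights)
    resp = np.exp(log_lik + f[:, None] + g)  # coupling; rows sum to 1/n
    return resp * n                          # per-point responsibilities
```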

Early-Learning Regularization Prevents Memorization of Noisy Labels

2 code implementations • NeurIPS 2020 • Sheng Liu, Jonathan Niles-Weed, Narges Razavian, Carlos Fernandez-Granda

In contrast with existing approaches, which use the model output during early learning to detect the examples with clean labels, and either ignore or attempt to correct the false labels, we take a different route and instead capitalize on early learning via regularization.

General Classification • Learning with noisy labels +1
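One way to make this concrete (an illustrative sketch; the constants and exact functional form are simplified relative to the paper): maintain an exponential moving average of each example's predicted probabilities, and add a penalty that rewards staying close to those early targets.

```python
import torch
import torch.nn.functional as F

class EarlyLearningRegularizer:
    """Sketch of a regularizer that trusts the model's early predictions:
    keep an exponential moving average (EMA) of softmax outputs per example
    and penalize the current prediction for drifting away from it."""

    def __init__(self, n_examples, n_classes, beta=0.7, lam=3.0):
        self.targets = torch.zeros(n_examples, n_classes)
        self.beta, self.lam = beta, lam

    def __call__(self, logits, labels, idx):
        probs = F.softmax(logits, dim=1)
        with torch.no_grad():  # update EMA targets without tracking gradients
            self.targets[idx] = (self.beta * self.targets[idx]
                                 + (1 - self.beta) * probs)
        dot = (self.targets[idx] * probs).sum(dim=1)
        reg = torch.log(1.0 - dot.clamp(max=1 - 1e-4)).mean()  # rewards dot -> 1
        return F.cross_entropy(logits, labels) + self.lam * reg
```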

Supervised Quantile Normalization for Low-rank Matrix Approximation

no code implementations • 8 Feb 2020 • Marco Cuturi, Olivier Teboul, Jonathan Niles-Weed, Jean-Philippe Vert

Low rank matrix factorization is a fundamental building block in machine learning, used for instance to summarize gene expression profile data or word-document counts.

Massively scalable Sinkhorn distances via the Nyström method

no code implementations • NeurIPS 2019 • Jason Altschuler, Francis Bach, Alessandro Rudi, Jonathan Niles-Weed

The Sinkhorn "distance", a variant of the Wasserstein distance with entropic regularization, is an increasingly popular tool in machine learning and statistical inference.
