Search Results for author: Lenaic Chizat

Found 9 papers, 7 papers with code

Scaling Algorithms for Unbalanced Transport Problems

3 code implementations • 20 Jul 2016 • Lenaic Chizat, Gabriel Peyré, Bernhard Schmitzer, François-Xavier Vialard

This article introduces a new class of fast algorithms to approximate variational problems involving unbalanced optimal transport.

Optimization and Control (65K10)
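
The workhorse here is a Sinkhorn-like scaling loop: relaxing the marginal constraints by KL penalties only changes the exponent applied in each update. A minimal numpy sketch under assumed settings (quadratic cost on a 1-D grid; the penalty weight rho, the grid, and the iteration count are illustrative, not taken from the paper):

```python
import numpy as np

def unbalanced_sinkhorn(a, b, C, eps=0.01, rho=1.0, n_iter=500):
    """Scaling iterations for entropic unbalanced OT where the marginal
    constraints are replaced by KL penalties of weight rho; eps is the
    entropic regularization. Returns the transport plan."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    power = rho / (rho + eps)             # exponent from the KL proximal step
    for _ in range(n_iter):
        u = (a / (K @ v)) ** power        # left scaling update
        v = (b / (K.T @ u)) ** power      # right scaling update
    return u[:, None] * K * v[None, :]    # plan gamma = diag(u) K diag(v)

# Example: two bumps with different total masses on a 1-D grid.
x = np.linspace(0, 1, 100)
a = np.exp(-(x - 0.3) ** 2 / 0.01); a /= a.sum()         # total mass 1
b = np.exp(-(x - 0.7) ** 2 / 0.01); b /= b.sum() / 1.5   # total mass 1.5
C = (x[:, None] - x[None, :]) ** 2
gamma = unbalanced_sinkhorn(a, b, C)
print(gamma.sum(), a.sum(), b.sum())      # plan mass adapts to the unequal inputs
```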

An Interpolating Distance between Optimal Transport and Fisher-Rao

1 code implementation • 22 Jun 2015 • Lenaic Chizat, Bernhard Schmitzer, Gabriel Peyré, François-Xavier Vialard

This metric interpolates between the quadratic Wasserstein and the Fisher-Rao metrics and generalizes optimal transport to measures with different masses.

Analysis of PDEs
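
For orientation, the dynamic formulation behind such an interpolating metric is typically written as below; the normalization of the growth term varies across papers, so the constant here is one convention rather than necessarily the paper's exact choice:

```latex
% Benamou-Brenier-type formulation with a source term; delta > 0 sets the
% length scale at which transport and growth are exchanged.
d^2(\rho_0, \rho_1) \;=\; \inf_{(\rho, v, \alpha)}
  \int_0^1 \!\! \int_\Omega \Big( |v_t(x)|^2 + \delta^2\, \alpha_t(x)^2 \Big)
  \, \mathrm{d}\rho_t(x) \, \mathrm{d}t
\quad \text{s.t.} \quad
\partial_t \rho_t + \operatorname{div}(\rho_t v_t) = \alpha_t \, \rho_t .
```

Heuristically, making growth expensive (large delta) pushes geodesics toward pure transport (Wasserstein-like behavior), while cheap growth pushes them toward pure mass change (Fisher-Rao-like behavior).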

Unbalanced Optimal Transport: Geometry and Kantorovich Formulation

1 code implementation • 21 Aug 2015 • Lenaic Chizat, Gabriel Peyré, Bernhard Schmitzer, François-Xavier Vialard

These distances are defined by two equivalent formulations: (i) a "fluid dynamic" formulation defining the distance as a geodesic distance over the space of measures; and (ii) a static "Kantorovich" formulation where the distance is the minimum of an optimization program over pairs of couplings describing the transfer (transport, creation, and destruction) of mass between two measures.

Optimization and Control

On Lazy Training in Differentiable Programming

1 code implementation • NeurIPS 2019 • Lenaic Chizat, Edouard Oyallon, Francis Bach

In a series of recent theoretical works, it was shown that strongly over-parameterized neural networks trained with gradient-based methods could converge exponentially fast to zero training loss, with their parameters hardly varying.
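
A small self-contained experiment illustrates the phenomenon: scale a centered two-layer model by a factor alpha and shrink the step size by alpha^2; as alpha grows, the loss still drops while the weights barely move. This is only an illustrative sketch (architecture, scalings, and step counts are assumptions, not the paper's setup):

```python
import numpy as np

n, d, m = 20, 5, 1000                        # samples, input dim, hidden width
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

def train(alpha, lr=0.5, steps=4000):
    rng = np.random.default_rng(1)           # identical init for every alpha
    W = rng.standard_normal((m, d)) / np.sqrt(d)
    a = rng.standard_normal(m)
    W0 = W.copy()
    f0 = np.tanh(X @ W.T) @ a / np.sqrt(m)   # output at initialization
    for _ in range(steps):
        t = np.tanh(X @ W.T)                 # (n, m) hidden activations
        pred = alpha * (t @ a / np.sqrt(m) - f0)   # centered, rescaled model
        r = pred - y
        ga = alpha * t.T @ r / (n * np.sqrt(m))
        gW = alpha * ((r[:, None] * (1 - t ** 2)) * a).T @ X / (n * np.sqrt(m))
        a -= lr / alpha ** 2 * ga            # step size shrunk by alpha^2
        W -= lr / alpha ** 2 * gW
    rel = np.linalg.norm(W - W0) / np.linalg.norm(W0)
    return 0.5 * np.mean(r ** 2), rel

for alpha in (1.0, 10.0, 100.0):
    loss, rel = train(alpha)
    print(f"alpha={alpha:6.1f}  loss={loss:.2e}  relative weight change={rel:.2e}")
```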

Sparse Optimization on Measures with Over-parameterized Gradient Descent

1 code implementation • 24 Jul 2019 • Lenaic Chizat

Minimizing a convex function of a measure with a sparsity-inducing penalty is a typical problem arising, e.g., in sparse spikes deconvolution or in training two-layer neural networks.
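
The over-parameterized approach can be sketched directly: spawn many more weighted particles than expected spikes and run plain gradient descent on their weights and positions. Everything below (Gaussian kernel, its width sigma, the penalty lam, step sizes, thresholds) is a hypothetical setup for illustration, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(1)
s = np.linspace(0, 1, 50)                   # sample locations
sigma = 0.05                                # kernel width (assumed)

def phi(x):
    """Gaussian features of spike positions x, evaluated at the samples s."""
    return np.exp(-(s[None, :] - np.asarray(x)[:, None]) ** 2 / (2 * sigma ** 2))

true_x = np.array([0.2, 0.5, 0.8])
true_w = np.array([1.0, -0.7, 0.5])
y = true_w @ phi(true_x)                    # noiseless observations

mp = 100                                    # many more particles than spikes
x = rng.uniform(0, 1, mp)                   # particle positions
w = np.zeros(mp)                            # signed particle weights
lam, lr = 1e-3, 0.05

for _ in range(5000):
    Phi = phi(x)                            # (mp, 50) features
    r = w @ Phi - y                         # residual at the samples
    gw = Phi @ r / len(s) + lam * np.sign(w)              # (sub)gradient in weights
    dPhi = (s[None, :] - x[:, None]) / sigma ** 2 * Phi   # d phi / d x
    gx = w * (dPhi @ r) / len(s)            # gradient in positions
    w -= lr * gw
    x -= lr * gx

# Several particles typically cluster at each true spike and share its weight.
keep = np.abs(w) > 0.05
print(np.round(x[keep], 2), np.round(w[keep], 2))
```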

On the symmetries in the dynamics of wide two-layer neural networks

1 code implementation • 16 Nov 2022 • Karl Hajjar, Lenaic Chizat

We consider the idealized setting of gradient flow on the population risk for infinitely wide two-layer ReLU neural networks (without bias), and study the effect of symmetries on the learned parameters and predictors.

On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport

no code implementations • NeurIPS 2018 • Lenaic Chizat, Francis Bach

Many tasks in machine learning and signal processing can be solved by minimizing a convex function of a measure.

Faster Wasserstein Distance Estimation with the Sinkhorn Divergence

no code implementations • NeurIPS 2020 • Lenaic Chizat, Pierre Roussillon, Flavien Léger, François-Xavier Vialard, Gabriel Peyré

We also propose and analyze an estimator based on Richardson extrapolation of the Sinkhorn divergence, which enjoys improved statistical and computational efficiency guarantees under a condition on the regularity of the approximation error; this condition is in particular satisfied for Gaussian densities.

Computational Efficiency
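
Richardson extrapolation here follows the generic template: if the divergence admits an expansion in the regularization parameter epsilon, two evaluations at epsilon and 2*epsilon can cancel the leading error term. Schematically (the exponent p and the precise quantity being extrapolated depend on the paper's regularity condition):

```latex
% If the regularized quantity expands as S_eps = W + c*eps^p + o(eps^p),
% combining two regularization levels cancels the leading term:
S_\varepsilon = W + c\,\varepsilon^p + o(\varepsilon^p)
\;\Longrightarrow\;
R_\varepsilon := \frac{2^p\, S_\varepsilon - S_{2\varepsilon}}{2^p - 1}
  = W + o(\varepsilon^p).
```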
