Search Results for author: Sinho Chewi

Found 22 papers, 2 papers with code

Sampling from the Mean-Field Stationary Distribution

no code implementations • 12 Feb 2024 • Yunbum Kook, Matthew S. Zhang, Sinho Chewi, Murat A. Erdogdu, Mufan Bill Li

We study the complexity of sampling from the stationary distribution of a mean-field SDE, or equivalently, the complexity of minimizing a functional over the space of probability measures which includes an interaction term.
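
For intuition, here is a minimal sketch of the finite-particle Langevin system commonly used to approximate such a mean-field stationary distribution; the confining potential, pairwise interaction, and step size below are illustrative choices, not the specific setting analyzed in the paper.

import numpy as np

def grad_V(x):
    # Confining potential V(x) = ||x||^2 / 2 (illustrative choice).
    return x

def grad_W(x, y, alpha=0.1):
    # Gradient in the first argument of the pairwise interaction
    # W(x, y) = alpha * ||x - y||^2 / 2 (illustrative choice).
    return alpha * (x - y)

def mean_field_langevin(n_particles=200, dim=2, step=0.01, n_steps=5000, rng=None):
    """Euler-Maruyama simulation of the interacting-particle Langevin system
    dX_i = -grad V(X_i) dt - (1/N) sum_j grad_1 W(X_i, X_j) dt + sqrt(2) dB_i."""
    rng = np.random.default_rng(rng)
    X = rng.standard_normal((n_particles, dim))
    for _ in range(n_steps):
        interaction = grad_W(X[:, None, :], X[None, :, :]).mean(axis=1)  # average over the empirical measure
        drift = -grad_V(X) - interaction
        X = X + step * drift + np.sqrt(2 * step) * rng.standard_normal(X.shape)
    return X  # the empirical measure approximates the mean-field stationary distribution

particles = mean_field_langevin()
print(particles.mean(axis=0), particles.std(axis=0))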

Fast parallel sampling under isoperimetry

no code implementations • 17 Jan 2024 • Nima Anari, Sinho Chewi, Thuy-Duong Vuong

For our main application, we show how to combine the TV distance guarantees of our algorithms with prior works and obtain RNC sampling-to-counting reductions for families of discrete distributions on the hypercube $\{\pm 1\}^n$ that are closed under exponential tilts and have bounded covariance.

Point Processes

Algorithms for mean-field variational inference via polyhedral optimization in the Wasserstein space

no code implementations • 5 Dec 2023 • Yiheng Jiang, Sinho Chewi, Aram-Alexandre Pooladian

We develop a theory of finite-dimensional polyhedral subsets over the Wasserstein space and optimization of functionals over them via first-order methods.

Variational Inference

Forward-backward Gaussian variational inference via JKO in the Bures-Wasserstein Space

no code implementations • 10 Apr 2023 • Michael Diao, Krishnakumar Balasubramanian, Sinho Chewi, Adil Salim

Of key interest in statistics and machine learning is Gaussian VI, which approximates $\pi$ by minimizing the Kullback-Leibler (KL) divergence to $\pi$ over the space of Gaussians.

Variational Inference
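
To make the objective concrete, here is a sketch of a Bures-Wasserstein-style gradient update for Gaussian VI, written for a Gaussian target so that the required expectations are exact; this is not the forward-backward (JKO) scheme of the paper, and the step size, target, and initialization are illustrative assumptions.

import numpy as np

def gaussian_vi_bw_gd(mu_target, S_target, n_iters=500, step=0.1):
    """Sketch of a Bures-Wasserstein-type gradient step for Gaussian VI.

    For the Gaussian target pi = N(mu_target, S_target), grad V(x) = S^{-1}(x - mu)
    and Hess V = S^{-1}, so the expectations under q = N(m, Sigma) are exact.
    """
    d = len(mu_target)
    S_inv = np.linalg.inv(S_target)
    m, Sigma = np.zeros(d), np.eye(d)                      # initial Gaussian q
    for _ in range(n_iters):
        m = m - step * S_inv @ (m - mu_target)             # mean update: -h * E_q[grad V]
        M = S_inv - np.linalg.inv(Sigma)                   # E_q[Hess V] - Sigma^{-1}
        A = np.eye(d) - step * M
        Sigma = A @ Sigma @ A.T                            # covariance update
    return m, Sigma

mu = np.array([1.0, -2.0])
S = np.array([[2.0, 0.5], [0.5, 1.0]])
m_hat, Sigma_hat = gaussian_vi_bw_gd(mu, S)
print(np.round(m_hat, 3), np.round(Sigma_hat, 3))          # should approach (mu, S)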

Query lower bounds for log-concave sampling

no code implementations • 5 Apr 2023 • Sinho Chewi, Jaume de Dios Pont, Jerry Li, Chen Lu, Shyam Narayanan

Log-concave sampling has witnessed remarkable algorithmic advances in recent years, but the corresponding problem of proving lower bounds for this task has remained elusive, with lower bounds previously known only in dimension one.

Faster high-accuracy log-concave sampling via algorithmic warm starts

no code implementations • 20 Feb 2023 • Jason M. Altschuler, Sinho Chewi

Understanding the complexity of sampling from a strongly log-concave and log-smooth distribution $\pi$ on $\mathbb{R}^d$ to high accuracy is a fundamental problem, from both a practical and a theoretical standpoint.


Improved Discretization Analysis for Underdamped Langevin Monte Carlo

no code implementations • 16 Feb 2023 • Matthew Zhang, Sinho Chewi, Mufan Bill Li, Krishnakumar Balasubramanian, Murat A. Erdogdu

As a byproduct, we also obtain the first KL divergence guarantees for ULMC without Hessian smoothness under strong log-concavity, which is based on a new result on the log-Sobolev constant along the underdamped Langevin diffusion.
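
As a point of reference, here is a naive Euler-Maruyama discretization of the underdamped (kinetic) Langevin diffusion; the paper analyzes a more careful ULMC discretization and its KL guarantees, which this sketch does not reproduce, and the target, friction, and step size are illustrative.

import numpy as np

def underdamped_langevin(grad_V, dim, step=0.01, friction=2.0, n_steps=10000, rng=None):
    """Naive Euler-Maruyama discretization of the kinetic Langevin diffusion
    dX = P dt,  dP = -gamma P dt - grad V(X) dt + sqrt(2 gamma) dB."""
    rng = np.random.default_rng(rng)
    x = np.zeros(dim)
    p = np.zeros(dim)
    samples = []
    for _ in range(n_steps):
        x = x + step * p
        p = p - step * (friction * p + grad_V(x)) \
              + np.sqrt(2 * friction * step) * rng.standard_normal(dim)
        samples.append(x.copy())
    return np.array(samples)

# Illustrative target: standard Gaussian, V(x) = ||x||^2 / 2.
traj = underdamped_langevin(grad_V=lambda x: x, dim=2)
print(traj[2000:].mean(axis=0), traj[2000:].std(axis=0))   # roughly 0 and 1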

Learning threshold neurons via the "edge of stability"

no code implementations • 14 Dec 2022 • Kwangjun Ahn, Sébastien Bubeck, Sinho Chewi, Yin Tat Lee, Felipe Suarez, Yi Zhang

For these models, we provably establish the edge of stability phenomenon and discover a sharp phase transition for the step size below which the neural network fails to learn "threshold-like" neurons (i.e., neurons with a non-zero first-layer bias).

Inductive Bias

Fisher information lower bounds for sampling

no code implementations • 5 Oct 2022 • Sinho Chewi, Patrik Gerber, Holden Lee, Chen Lu

We prove two lower bounds for the complexity of non-log-concave sampling within the framework of Balasubramanian et al. (2022), who introduced the use of Fisher information (FI) bounds as a notion of approximate first-order stationarity in sampling.

Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions

no code implementations • 22 Sep 2022 • Sitan Chen, Sinho Chewi, Jerry Li, Yuanzhi Li, Adil Salim, Anru R. Zhang

We provide theoretical convergence guarantees for score-based generative models (SGMs) such as denoising diffusion probabilistic models (DDPMs), which constitute the backbone of large-scale real-world generative models such as DALL$\cdot$E 2.

Denoising
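
For orientation, here is a schematic of DDPM-style ancestral sampling with a placeholder noise predictor (the exact predictor for standard Gaussian data, purely as an illustration); the paper's contribution is the convergence theory for such samplers when the score is only learned approximately, which this sketch does not touch.

import numpy as np

def ddpm_sample(eps_model, shape, betas, rng=None):
    """Schematic DDPM ancestral sampling: start from Gaussian noise and apply
    the standard reverse update driven by a (learned) noise predictor eps_model."""
    rng = np.random.default_rng(rng)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal(shape)                        # x_T ~ N(0, I)
    for t in reversed(range(len(betas))):
        eps_hat = eps_model(x, t)                         # predicted noise at step t
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise              # sigma_t^2 = beta_t variance choice
    return x

betas = np.linspace(1e-4, 0.02, 1000)
alpha_bars = np.cumprod(1.0 - betas)
# Placeholder predictor: when the data distribution is N(0, I), the optimal
# noise predictor is eps(x_t, t) = sqrt(1 - alpha_bar_t) * x_t (in practice, a neural net).
dummy_eps = lambda x, t: np.sqrt(1.0 - alpha_bars[t]) * x
print(ddpm_sample(dummy_eps, shape=(5,), betas=betas))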

Variational inference via Wasserstein gradient flows

1 code implementation • 31 May 2022 • Marc Lambert, Sinho Chewi, Francis Bach, Silvère Bonnabel, Philippe Rigollet

Along with Markov chain Monte Carlo (MCMC) methods, variational inference (VI) has emerged as a central computational approach to large-scale Bayesian inference.

Bayesian Inference, Variational Inference

Improved analysis for a proximal algorithm for sampling

no code implementations • 13 Feb 2022 • Yongxin Chen, Sinho Chewi, Adil Salim, Andre Wibisono

We study the proximal sampler of Lee, Shen, and Tian (2021) and obtain new convergence guarantees under weaker assumptions than strong log-concavity: namely, our results hold for (1) weakly log-concave targets, and (2) targets satisfying isoperimetric assumptions which allow for non-log-concavity.
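
Here is a minimal sketch of the two-step proximal sampler, specialized to a standard Gaussian target so that the restricted Gaussian oracle (the x-given-y resampling step) has a closed form; for general targets that oracle is the nontrivial ingredient the analysis concerns, and the value of eta and the step count below are illustrative.

import numpy as np

def proximal_sampler_gaussian(dim=2, eta=0.5, n_steps=2000, rng=None):
    """Two-step proximal sampler for the standard Gaussian target pi(x) ~ exp(-||x||^2 / 2).
    The restricted Gaussian oracle x | y ~ exp(-||x||^2/2 - ||x - y||^2 / (2 eta))
    is N(y / (1 + eta), eta / (1 + eta) * I) in this special case."""
    rng = np.random.default_rng(rng)
    x = np.zeros(dim)
    samples = []
    for _ in range(n_steps):
        # Forward step: y | x ~ N(x, eta I).
        y = x + np.sqrt(eta) * rng.standard_normal(dim)
        # Backward step (restricted Gaussian oracle), closed form for this quadratic potential.
        x = y / (1.0 + eta) + np.sqrt(eta / (1.0 + eta)) * rng.standard_normal(dim)
        samples.append(x.copy())
    return np.array(samples)

samples = proximal_sampler_gaussian()
print(samples.mean(axis=0), samples.std(axis=0))   # roughly 0 and 1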

Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo

no code implementations • 10 Feb 2022 • Krishnakumar Balasubramanian, Sinho Chewi, Murat A. Erdogdu, Adil Salim, Matthew Zhang

For the task of sampling from a density $\pi \propto \exp(-V)$ on $\mathbb{R}^d$, where $V$ is possibly non-convex but $L$-gradient Lipschitz, we prove that averaged Langevin Monte Carlo outputs a sample with $\varepsilon$-relative Fisher information after $O( L^2 d^2/\varepsilon^2)$ iterations.
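
For reference, here is the basic Langevin Monte Carlo iteration the guarantee refers to, with an illustrative non-convex double-well potential; the averaging of the iterates and the Fisher-information accounting from the paper are not reproduced in this sketch.

import numpy as np

def langevin_monte_carlo(grad_V, dim, step=0.01, n_steps=10000, rng=None):
    """LMC iteration x_{k+1} = x_k - h * grad V(x_k) + sqrt(2h) * xi_k,  xi_k ~ N(0, I)."""
    rng = np.random.default_rng(rng)
    x = np.zeros(dim)
    iterates = []
    for _ in range(n_steps):
        x = x - step * grad_V(x) + np.sqrt(2 * step) * rng.standard_normal(dim)
        iterates.append(x.copy())
    return np.array(iterates)

# Illustrative non-convex potential: V(x) = (||x||^2 - 1)^2 / 4 (double well),
# so grad V(x) = (||x||^2 - 1) * x.
grad_V = lambda x: (np.dot(x, x) - 1.0) * x
iters = langevin_monte_carlo(grad_V, dim=2)
print(iters[2000:].mean(axis=0))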

Analysis of Langevin Monte Carlo from Poincaré to Log-Sobolev

no code implementations • 23 Dec 2021 • Sinho Chewi, Murat A. Erdogdu, Mufan Bill Li, Ruoqi Shen, Matthew Zhang

Classically, the continuous-time Langevin diffusion converges exponentially fast to its stationary distribution $\pi$ under the sole assumption that $\pi$ satisfies a Poincaré inequality.

The entropic barrier is $n$-self-concordant

no code implementations • 21 Dec 2021 • Sinho Chewi

For any convex body $K \subseteq \mathbb R^n$, S. Bubeck and R. Eldan introduced the entropic barrier on $K$ and showed that it is a $(1+o(1)) \, n$-self-concordant barrier.

Averaging on the Bures-Wasserstein manifold: dimension-free convergence of gradient descent

no code implementations • NeurIPS 2021 • Jason M. Altschuler, Sinho Chewi, Patrik Gerber, Austin J. Stromme

We study first-order optimization algorithms for computing the barycenter of Gaussian distributions with respect to the optimal transport metric.
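
To make the objective concrete, here is a sketch of the classical fixed-point iteration for the barycenter of centered Gaussians under the 2-Wasserstein (Bures-Wasserstein) metric; this is not claimed to be the exact first-order scheme analyzed in the paper, and the input covariances are illustrative.

import numpy as np

def sqrtm_spd(A):
    """Matrix square root of a symmetric positive definite matrix via eigendecomposition."""
    A = (A + A.T) / 2.0                     # enforce exact symmetry for numerical safety
    w, U = np.linalg.eigh(A)
    return (U * np.sqrt(w)) @ U.T

def bures_wasserstein_barycenter(covs, n_iters=100):
    """Fixed-point iteration for the barycenter of centered Gaussians N(0, Sigma_i):
    Sigma <- Sigma^{-1/2} ( (1/n) sum_i (Sigma^{1/2} Sigma_i Sigma^{1/2})^{1/2} )^2 Sigma^{-1/2}."""
    Sigma = np.mean(covs, axis=0)           # initialization
    for _ in range(n_iters):
        S = sqrtm_spd(Sigma)
        S_inv = np.linalg.inv(S)
        M = np.mean([sqrtm_spd(S @ C @ S) for C in covs], axis=0)
        Sigma = S_inv @ M @ M @ S_inv
    return Sigma

covs = [np.array([[1.0, 0.3], [0.3, 2.0]]),
        np.array([[2.0, -0.4], [-0.4, 1.0]]),
        np.array([[1.5, 0.0], [0.0, 0.5]])]
print(np.round(bures_wasserstein_barycenter(covs), 3))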

The query complexity of sampling from strongly log-concave distributions in one dimension

no code implementations • 29 May 2021 • Sinho Chewi, Patrik Gerber, Chen Lu, Thibaut Le Gouic, Philippe Rigollet

We establish the first tight lower bound of $\Omega(\log\log\kappa)$ on the query complexity of sampling from the class of strongly log-concave and log-smooth distributions with condition number $\kappa$ in one dimension.

Rejection sampling from shape-constrained distributions in sublinear time

no code implementations • 29 May 2021 • Sinho Chewi, Patrik Gerber, Chen Lu, Thibaut Le Gouic, Philippe Rigollet

We consider the task of generating exact samples from a target distribution, known up to normalization, over a finite alphabet.
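
For context, here is a textbook rejection sampler over a finite alphabet with a uniform proposal, given only unnormalized weights; the paper's algorithms achieve sublinear behavior that this naive reference sketch does not, and the weights below are illustrative.

import numpy as np

def rejection_sample_finite(weights, n_samples=10000, rng=None):
    """Exact sampling from p(i) proportional to weights[i] via rejection:
    propose i uniformly, accept with probability weights[i] / max(weights)."""
    rng = np.random.default_rng(rng)
    weights = np.asarray(weights, dtype=float)
    w_max = weights.max()
    out = []
    while len(out) < n_samples:
        i = rng.integers(len(weights))
        if rng.random() < weights[i] / w_max:
            out.append(i)
    return np.array(out)

weights = [5.0, 1.0, 1.0, 3.0]                      # unnormalized target weights
samples = rejection_sample_finite(weights)
print(np.bincount(samples) / len(samples))          # approx. [0.5, 0.1, 0.1, 0.3]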

Optimal dimension dependence of the Metropolis-Adjusted Langevin Algorithm

no code implementations • 23 Dec 2020 • Sinho Chewi, Chen Lu, Kwangjun Ahn, Xiang Cheng, Thibaut Le Gouic, Philippe Rigollet

Conventional wisdom in the sampling literature, backed by a popular diffusion scaling limit, suggests that the mixing time of the Metropolis-Adjusted Langevin Algorithm (MALA) scales as $O(d^{1/3})$, where $d$ is the dimension.
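
For reference, here is one iteration of MALA, the algorithm whose dimension dependence the paper pins down: a Langevin proposal followed by a Metropolis-Hastings correction. The target and step size below are illustrative, and the sketch does not reproduce the mixing-time analysis.

import numpy as np

def mala(V, grad_V, dim, step=0.05, n_steps=10000, rng=None):
    """Metropolis-Adjusted Langevin Algorithm: Langevin proposal plus MH accept/reject."""
    rng = np.random.default_rng(rng)

    def log_q(x_to, x_from):
        # Log-density (up to constants) of the proposal N(x_from - h grad V(x_from), 2h I).
        diff = x_to - (x_from - step * grad_V(x_from))
        return -np.dot(diff, diff) / (4.0 * step)

    x = np.zeros(dim)
    samples, accepted = [], 0
    for _ in range(n_steps):
        prop = x - step * grad_V(x) + np.sqrt(2 * step) * rng.standard_normal(dim)
        log_alpha = (V(x) - V(prop)) + log_q(x, prop) - log_q(prop, x)
        if np.log(rng.random()) < log_alpha:
            x, accepted = prop, accepted + 1
        samples.append(x.copy())
    return np.array(samples), accepted / n_steps

# Illustrative target: standard Gaussian, V(x) = ||x||^2 / 2.
samples, acc = mala(V=lambda x: 0.5 * np.dot(x, x), grad_V=lambda x: x, dim=5)
print(acc, samples[2000:].mean(axis=0))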

Efficient constrained sampling via the mirror-Langevin algorithm

no code implementations • NeurIPS 2021 • Kwangjun Ahn, Sinho Chewi

We propose a new discretization of the mirror-Langevin diffusion and give a crisp proof of its convergence.
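
Below is a naive Euler-type discretization of the mirror-Langevin diffusion, included only to fix notation; it is not necessarily the discretization proposed in the paper, and the mirror map (an entropy-type map for the positive orthant), target, and step size are illustrative assumptions.

import numpy as np

def mirror_langevin(grad_V, dim, step=0.01, n_steps=20000, rng=None):
    """Euler-type discretization of the mirror-Langevin diffusion on the positive orthant,
    with mirror map phi(x) = sum_i (x_i log x_i - x_i), for which grad phi(x) = log x,
    Hess phi(x) = diag(1/x), and (grad phi)^{-1}(y) = exp(y)."""
    rng = np.random.default_rng(rng)
    x = np.ones(dim)
    samples = []
    for _ in range(n_steps):
        xi = rng.standard_normal(dim)
        y = np.log(x)                                                    # dual (mirror) coordinates
        y = y - step * grad_V(x) + np.sqrt(2 * step) * xi / np.sqrt(x)   # noise scaled by Hess phi(x)^{1/2}
        x = np.exp(y)                                                    # map back to the primal domain
        samples.append(x.copy())
    return np.array(samples)

# Illustrative constrained target on the positive orthant: pi(x) ~ prod_i x_i exp(-x_i),
# i.e. independent Gamma(2, 1) coordinates, so grad V(x) = 1 - 1/x.
samples = mirror_langevin(grad_V=lambda x: 1.0 - 1.0 / x, dim=3)
print(samples[5000:].mean(axis=0))   # the Gamma(2, 1) mean is 2 in each coordinate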

SVGD as a kernelized Wasserstein gradient flow of the chi-squared divergence

1 code implementation • NeurIPS 2020 • Sinho Chewi, Thibaut Le Gouic, Chen Lu, Tyler Maunu, Philippe Rigollet

Stein Variational Gradient Descent (SVGD), a popular sampling algorithm, is often described as the kernelized gradient flow for the Kullback-Leibler divergence in the geometry of optimal transport.
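
For concreteness, here is the standard SVGD update with an RBF kernel; the kernel bandwidth, target, and step size are illustrative, and the chi-squared gradient-flow interpretation established in the paper is not reflected in the code itself.

import numpy as np

def svgd(grad_log_pi, particles, step=0.1, n_iters=500, bandwidth=1.0):
    """Stein Variational Gradient Descent with an RBF kernel:
    x_i <- x_i + eps * (1/n) sum_j [ k(x_j, x_i) grad log pi(x_j) + grad_{x_j} k(x_j, x_i) ]."""
    X = np.array(particles, dtype=float)
    n = len(X)
    for _ in range(n_iters):
        diffs = X[:, None, :] - X[None, :, :]                  # diffs[j, i] = x_j - x_i
        sq_dists = np.sum(diffs ** 2, axis=-1)
        K = np.exp(-sq_dists / (2.0 * bandwidth ** 2))         # K[j, i] = k(x_j, x_i)
        grads = grad_log_pi(X)                                 # grad log pi(x_j), shape (n, d)
        attraction = K.T @ grads / n                           # kernel-weighted score term
        repulsion = -np.einsum('ji,jid->id', K, diffs) / (bandwidth ** 2 * n)
        X = X + step * (attraction + repulsion)
    return X

# Illustrative target: standard 2-D Gaussian, grad log pi(x) = -x.
rng = np.random.default_rng(0)
out = svgd(grad_log_pi=lambda X: -X, particles=rng.standard_normal((100, 2)) + 3.0)
print(out.mean(axis=0), out.std(axis=0))   # particles drift toward mean 0 and spread out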

Exponential ergodicity of mirror-Langevin diffusions

no code implementations • NeurIPS 2020 • Sinho Chewi, Thibaut Le Gouic, Chen Lu, Tyler Maunu, Philippe Rigollet, Austin J. Stromme

Motivated by the problem of sampling from ill-conditioned log-concave distributions, we give a clean non-asymptotic convergence analysis of mirror-Langevin diffusions as introduced in Zhang et al. (2020).
