Search Results for author: Santosh S. Vempala

Found 12 papers, 1 paper with code

Provable Lifelong Learning of Representations

no code implementations · 27 Oct 2021 · Xinyuan Cao, Weiyang Liu, Santosh S. Vempala

We prove that, to achieve any desired accuracy on all tasks, the dimension of the learned representation remains close to that of the underlying representation.

Assemblies of neurons can learn to classify well-separated distributions

1 code implementation · 7 Oct 2021 · Max Dabagia, Christos H. Papadimitriou, Santosh S. Vempala

We show that the Assembly Calculus (AC) provides a mechanism for learning to classify samples from well-separated classes.
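
Purely as a toy sketch of the AC's projection primitive (sparse random connectivity, $k$-winners-take-all firing, Hebbian plasticity) together with an overlap-based classification rule, here is an illustrative numpy version. The graph density, cap size `k`, plasticity rate `beta`, and the `classify` rule are all assumptions made for this example, not the paper's construction or proof:

```python
import numpy as np

def project(W, stim_input, k, beta, n_rounds):
    """Drive a brain area with a fixed input: a toy Assembly Calculus
    projection.  Each round, the k neurons with the highest total input
    fire (k-winners-take-all), and synapses from the previously firing
    set into the new winners are strengthened (Hebbian plasticity)."""
    n = W.shape[0]
    active = np.zeros(n, dtype=bool)
    for _ in range(n_rounds):
        drive = stim_input + W.T @ active          # feed-forward + recurrent
        winners = np.argsort(drive)[-k:]           # k-cap
        new_active = np.zeros(n, dtype=bool)
        new_active[winners] = True
        W[np.ix_(active, new_active)] *= 1 + beta  # Hebbian update
        active = new_active
    return active

rng = np.random.default_rng(0)
n, k = 1000, 50
W = (rng.random((n, n)) < 0.05).astype(float)      # sparse random recurrence

# Form one assembly per class from a class-representative input, then
# classify a sample by which class assembly its projection overlaps most.
class_inputs = [rng.random(n), rng.random(n)]
assemblies = [project(W.copy(), s, k, beta=0.1, n_rounds=10)
              for s in class_inputs]

def classify(sample_input):
    a = project(W.copy(), sample_input, k, beta=0.0, n_rounds=10)
    return int(np.argmax([np.sum(a & asm) for asm in assemblies]))
```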

The Mirror Langevin Algorithm Converges with Vanishing Bias

no code implementations · 24 Sep 2021 · Ruilin Li, Molei Tao, Santosh S. Vempala, Andre Wibisono

The Mirror Langevin Diffusion (MLD) is a sampling analogue of mirror flow in continuous time, and it has nice convergence properties under log-Sobolev or Poincaré inequalities relative to the Hessian metric, as shown by Chewi et al. (2020).
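
To make the discrete-time algorithm concrete: one step of the Mirror Langevin Algorithm maps $x_k$ to the dual point $\nabla\phi(x_k)$, takes a gradient step with Hessian-scaled noise, and maps back through $(\nabla\phi)^{-1}$. The sketch below is a hedged illustration using the entropic mirror map on the positive orthant and an exponential target, both chosen for the example rather than taken from the paper:

```python
import numpy as np

def mirror_langevin(grad_f, x0, step, n_iters, rng=None):
    """Mirror Langevin Algorithm with the entropic mirror map on R_+^d.

    Mirror map phi(x) = sum_i (x_i log x_i - x_i), so grad phi(x) = log x,
    (grad phi)^{-1}(y) = exp(y), and Hess phi(x) = diag(1 / x).  One step:
        y  = log(x) - step * grad_f(x) + sqrt(2*step) * diag(1/sqrt(x)) @ xi
        x' = exp(y)
    which keeps the iterates in the positive orthant by construction.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        noise = np.sqrt(2 * step / x) * rng.standard_normal(x.shape)
        x = np.exp(np.log(x) - step * grad_f(x) + noise)
    return x

# Illustrative target: nu ~ exp(-sum_i x_i) restricted to x > 0.
x = mirror_langevin(grad_f=lambda x: np.ones_like(x),
                    x0=np.ones(3), step=0.01, n_iters=2000)
```

The bias referred to in the title is the discretization bias of this scheme, which the paper shows vanishes as the step size goes to zero.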

Robustly Learning Mixtures of $k$ Arbitrary Gaussians

no code implementations · 3 Dec 2020 · Ainesh Bakshi, Ilias Diakonikolas, He Jia, Daniel M. Kane, Pravesh K. Kothari, Santosh S. Vempala

We give a polynomial-time algorithm for the problem of robustly estimating a mixture of $k$ arbitrary Gaussians in $\mathbb{R}^d$, for any fixed $k$, in the presence of a constant fraction of arbitrary corruptions.

Tensor Decomposition

The Communication Complexity of Optimization

no code implementations · 13 Jun 2019 · Santosh S. Vempala, Ruosong Wang, David P. Woodruff

We first resolve the randomized and deterministic communication complexity in the point-to-point model of communication, showing it is $\tilde{\Theta}(d^2L + sd)$ and $\tilde{\Theta}(sd^2L)$, respectively.

Distributed Optimization

Optimal Convergence Rate of Hamiltonian Monte Carlo for Strongly Logconcave Distributions

no code implementations · 7 May 2019 · Zongchen Chen, Santosh S. Vempala

We study Hamiltonian Monte Carlo (HMC) for sampling from a strongly logconcave density proportional to $e^{-f}$ where $f:\mathbb{R}^d \to \mathbb{R}$ is $\mu$-strongly convex and $L$-smooth (the condition number is $\kappa = L/\mu$).
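
As a hedged illustration of the mechanics only, here is a minimal numpy sketch of one unadjusted HMC step with a standard leapfrog integrator; the step size, leapfrog count, and the $\kappa = 10$ quadratic target are assumptions made for the example, not the scheme or parameters from the paper's analysis:

```python
import numpy as np

def hmc_step(grad_f, x, step, n_leapfrog, rng):
    """One unadjusted HMC step for a density proportional to exp(-f(x)):
    resample a Gaussian momentum, then integrate Hamiltonian dynamics
    with the leapfrog scheme (no Metropolis correction)."""
    p = rng.standard_normal(x.shape)      # fresh momentum ~ N(0, I)
    p = p - 0.5 * step * grad_f(x)        # initial half step for momentum
    for _ in range(n_leapfrog - 1):
        x = x + step * p                  # full step for position
        p = p - step * grad_f(x)          # full step for momentum
    x = x + step * p                      # final position step
    p = p - 0.5 * step * grad_f(x)        # final half step (discarded)
    return x

# Illustrative target: f(x) = x^T A x / 2 with mu = 1, L = 10, kappa = 10.
rng = np.random.default_rng(0)
A = np.diag([1.0, 10.0])
x = np.ones(2)
for _ in range(1000):
    x = hmc_step(lambda y: A @ y, x, step=0.1, n_leapfrog=5, rng=rng)
```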

Rapid Convergence of the Unadjusted Langevin Algorithm: Isoperimetry Suffices

no code implementations · NeurIPS 2019 · Santosh S. Vempala, Andre Wibisono

We study the Unadjusted Langevin Algorithm (ULA) for sampling from a probability distribution $\nu = e^{-f}$ on $\mathbb{R}^n$.
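
The ULA iteration itself fits in one line; as a concrete, hedged illustration (the paper lists no implementation), here is a minimal numpy sketch, with a standard Gaussian target chosen purely for the example:

```python
import numpy as np

def ula(grad_f, x0, step, n_iters, rng=None):
    """Unadjusted Langevin Algorithm for sampling from nu ~ exp(-f).

    Iterates x_{k+1} = x_k - step * grad_f(x_k) + sqrt(2*step) * xi_k
    with xi_k ~ N(0, I); the stationary distribution carries a
    discretization bias, hence 'unadjusted'."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iters,) + x.shape)
    for i in range(n_iters):
        x = x - step * grad_f(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        samples[i] = x
    return samples

# Illustrative target: standard Gaussian on R^2, f(x) = ||x||^2 / 2.
draws = ula(grad_f=lambda x: x, x0=np.zeros(2), step=0.01, n_iters=5000)
```

As the title indicates, the paper's point is that an isoperimetric (log-Sobolev) inequality for $\nu$, rather than logconcavity, is what drives the convergence of these iterates.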

Algorithmic Theory of ODEs and Sampling from Well-conditioned Logconcave Densities

no code implementations · 15 Dec 2018 · Yin Tat Lee, Zhao Song, Santosh S. Vempala

We apply this to the sampling problem to obtain a nearly linear implementation of HMC for a broad class of smooth, strongly logconcave densities, with the number of iterations (parallel depth) and gradient evaluations being $\mathit{polylogarithmic}$ in the dimension (rather than polynomial as in previous work).

Convergence Rate of Riemannian Hamiltonian Monte Carlo and Faster Polytope Volume Computation

no code implementations · 17 Oct 2017 · Yin Tat Lee, Santosh S. Vempala

A key ingredient of our analysis is a proof of an analog of the KLS conjecture for Gibbs distributions over manifolds.

Max vs Min: Tensor Decomposition and ICA with nearly Linear Sample Complexity

no code implementations · 9 Dec 2014 · Santosh S. Vempala, Ying Xiao

We present a simple, general technique for reducing the sample complexity of matrix and tensor decomposition algorithms applied to distributions.

Tensor Decomposition
