no code implementations • 17 Nov 2021 • Shivam Garg, Santosh S. Vempala

We also show that FA can be far from optimal when $r < \operatorname{rank}(Y)$.

no code implementations • 27 Oct 2021 • Xinyuan Cao, Weiyang Liu, Santosh S. Vempala

We prove that for any desired accuracy on all tasks, the dimension of the representation remains close to that of the underlying representation.

1 code implementation • 7 Oct 2021 • Max Dabagia, Christos H. Papadimitriou, Santosh S. Vempala

We show that the Assembly Calculus (AC) provides a mechanism for learning to classify samples from well-separated classes.
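
For intuition, here is a heavily simplified Python sketch of the AC's projection primitive: random sparse synapses, a winner-take-all cap of $k$ firing neurons, and Hebbian weight updates. The connectivity density, parameters, and the omission of recurrent connections within the area are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def project(stimulus, W, k, beta, rounds):
    """Project a binary stimulus into an area of n neurons: each round,
    the k neurons with the largest synaptic input fire (the cap), and
    synapses from firing stimulus neurons into the winners are scaled
    by 1 + beta (Hebbian plasticity). Returns the final assembly."""
    stim_idx = np.flatnonzero(stimulus)
    winners = np.array([], dtype=int)
    for _ in range(rounds):
        winners = np.argsort(W @ stimulus)[-k:]     # winner-take-all cap
        W[np.ix_(winners, stim_idx)] *= 1.0 + beta  # strengthen used synapses
    return set(winners)

# Toy demo: two disjoint stimulus classes should map to mostly disjoint
# assemblies, which a downstream readout can then separate.
rng = np.random.default_rng(0)
n_in, n, k = 200, 500, 25
W = (rng.random((n, n_in)) < 0.05).astype(float)  # sparse random connectivity
s1 = np.zeros(n_in); s1[:50] = 1.0
s2 = np.zeros(n_in); s2[100:150] = 1.0
a1 = project(s1, W.copy(), k, beta=0.1, rounds=10)
a2 = project(s2, W.copy(), k, beta=0.1, rounds=10)
print("assembly overlap:", len(a1 & a2) / k)  # small for well-separated classes
```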

no code implementations • 24 Sep 2021 • Ruilin Li, Molei Tao, Santosh S. Vempala, Andre Wibisono

The Mirror Langevin Diffusion (MLD) is a sampling analogue of mirror flow in continuous time; as shown by Chewi et al. (2020), it enjoys favorable convergence guarantees under log-Sobolev or Poincaré inequalities relative to the Hessian metric.
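
As a concrete illustration, the sketch below applies one standard Euler-Maruyama discretization of the mirror Langevin dynamics with the entropic mirror map $\phi(x) = \sum_i (x_i \log x_i - x_i)$ on the positive orthant. The mirror map, target $f$, and step size are assumptions chosen for simplicity, not the paper's setting.

```python
import numpy as np

def mirror_langevin_step(x, grad_f, h, rng):
    """One Euler-Maruyama step of mirror Langevin dynamics with the
    entropic mirror map phi(x) = sum(x log x - x) on the positive orthant:
    grad(phi)(x) = log(x), its inverse is exp, and the Hessian metric is
    diag(1/x), whose square root scales the Gaussian noise by x**(-1/2)."""
    xi = rng.standard_normal(x.shape)
    y = np.log(x) - h * grad_f(x) + np.sqrt(2.0 * h) * xi / np.sqrt(x)
    return np.exp(y)  # map back through the inverse mirror map

# Illustrative target on the positive orthant: f(x) = sum(x), i.e.
# i.i.d. Exp(1) coordinates (mean 1 in each coordinate).
rng = np.random.default_rng(1)
x = np.ones(3)
for _ in range(20000):
    x = mirror_langevin_step(x, lambda z: np.ones_like(z), 0.005, rng)
```

Note that the iterate stays in the positive orthant automatically, since the update is mapped back through $\exp$; this is one practical appeal of the mirror construction for constrained domains.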

no code implementations • 3 Dec 2020 • Ainesh Bakshi, Ilias Diakonikolas, He Jia, Daniel M. Kane, Pravesh K. Kothari, Santosh S. Vempala

We give a polynomial-time algorithm for the problem of robustly estimating a mixture of $k$ arbitrary Gaussians in $\mathbb{R}^d$, for any fixed $k$, in the presence of a constant fraction of arbitrary corruptions.

no code implementations • 13 Jun 2019 • Santosh S. Vempala, Ruosong Wang, David P. Woodruff

We first resolve the randomized and deterministic communication complexity in the point-to-point model of communication, showing it is $\tilde{\Theta}(d^2L + sd)$ and $\tilde{\Theta}(sd^2L)$, respectively.

no code implementations • 7 May 2019 • Zongchen Chen, Santosh S. Vempala

We study Hamiltonian Monte Carlo (HMC) for sampling from a strongly logconcave density proportional to $e^{-f}$ where $f:\mathbb{R}^d \to \mathbb{R}$ is $\mu$-strongly convex and $L$-smooth (the condition number is $\kappa = L/\mu$).
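
For reference, here is a textbook leapfrog HMC sketch with a Metropolis correction, run on an illustrative ill-conditioned Gaussian. The quadratic $f$, step size, and path length are assumptions, and the paper analyzes an unadjusted variant, so this is a generic sketch rather than the paper's exact algorithm.

```python
import numpy as np

def hmc_step(f, grad_f, x, eps, n_leap, rng):
    """One HMC step: draw a fresh momentum, integrate the Hamiltonian
    dynamics for H(x, p) = f(x) + ||p||^2 / 2 with the leapfrog scheme,
    then apply a Metropolis accept/reject to correct discretization error."""
    p = rng.standard_normal(x.shape)
    x_new, p_new = x.copy(), p - 0.5 * eps * grad_f(x)  # initial half-step
    for i in range(n_leap):
        x_new = x_new + eps * p_new
        if i < n_leap - 1:
            p_new = p_new - eps * grad_f(x_new)
    p_new = p_new - 0.5 * eps * grad_f(x_new)           # final half-step
    dH = (f(x_new) + 0.5 * p_new @ p_new) - (f(x) + 0.5 * p @ p)
    return x_new if rng.random() < np.exp(-dH) else x

# Illustrative ill-conditioned Gaussian: f(x) = x^T A x / 2 with
# mu = 1 and L = 25, so kappa = L / mu = 25.
A = np.diag([1.0, 25.0])
f = lambda x: 0.5 * x @ A @ x
grad_f = lambda x: A @ x
rng = np.random.default_rng(2)
x = np.ones(2)
samples = []
for _ in range(5000):
    x = hmc_step(f, grad_f, x, eps=0.15, n_leap=10, rng=rng)
    samples.append(x)
print(np.var(samples, axis=0))  # roughly diag(A)^{-1} = [1.0, 0.04]
```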

no code implementations • NeurIPS 2019 • Santosh S. Vempala, Andre Wibisono

We study the Unadjusted Langevin Algorithm (ULA) for sampling from a probability distribution $\nu = e^{-f}$ on $\mathbb{R}^n$.
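
For concreteness, a minimal sketch of ULA follows; the standard-Gaussian target and step size are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def ula_chain(grad_f, x0, eta, n_iters, rng):
    """Run ULA: x_{k+1} = x_k - eta * grad_f(x_k) + sqrt(2 * eta) * xi_k,
    where xi_k is standard Gaussian noise; returns the final iterate."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta) * rng.standard_normal(x.shape)
    return x

# Illustrative target: nu = e^{-f} with f(x) = ||x||^2 / 2 (the standard
# Gaussian on R^n), so grad_f(x) = x.
rng = np.random.default_rng(3)
samples = np.array([ula_chain(lambda z: z, np.zeros(2), 0.01, 2000, rng)
                    for _ in range(500)])
print(samples.mean(axis=0), samples.var(axis=0))  # near 0 and 1 per coordinate
```

Because there is no Metropolis correction, the chain's stationary distribution is slightly biased away from $\nu$ at any fixed step size, which is exactly the discretization error the paper's analysis quantifies.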

no code implementations • 15 Dec 2018 • Yin Tat Lee, Zhao Song, Santosh S. Vempala

We apply this to the sampling problem to obtain a nearly linear implementation of HMC for a broad class of smooth, strongly logconcave densities, with the number of iterations (parallel depth) and gradient evaluations being $\mathit{polylogarithmic}$ in the dimension (rather than polynomial as in previous work).

no code implementations • 17 Oct 2017 • Yin Tat Lee, Santosh S. Vempala

A key ingredient of our analysis is a proof of an analog of the KLS conjecture for Gibbs distributions over manifolds.

no code implementations • 26 Dec 2014 • Christos H. Papadimitriou, Santosh S. Vempala

We show that PJOIN can be implemented in Valiant's neuroidal model.

no code implementations • 9 Dec 2014 • Santosh S. Vempala, Ying Xiao

We present a simple, general technique for reducing the sample complexity of matrix and tensor decomposition algorithms applied to distributions.
