no code implementations • 24 Nov 2023 • Adam Tauman Kalai, Santosh S. Vempala
For "arbitrary" facts whose veracity cannot be determined from the training data, we show that hallucinations must occur at a certain rate for language models that satisfy a statistical calibration condition appropriate for generative language models.
no code implementations • 24 Jul 2023 • Yunbum Kook, Santosh S. Vempala
The Dikin walk uses a local metric defined by a self-concordant barrier for linear constraints.
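As a minimal sketch of the walk itself (our illustration, not the paper's implementation): on a polytope $\{x : Ax \le b\}$, the log-barrier Hessian supplies the local metric, each step proposes a Gaussian perturbation in that metric, and a Metropolis filter keeps the uniform distribution stationary.

```python
import numpy as np

def dikin_step(x, A, b, r=0.5, rng=np.random.default_rng()):
    """One step of a Dikin walk on {x : Ax <= b} (illustrative sketch).

    Local metric H(x) = A^T S(x)^{-2} A, where S(x) = diag(b - Ax)
    is the slack matrix of the log barrier.
    """
    n = len(x)

    def hessian(z):
        s = b - A @ z                       # slacks; positive in the interior
        return A.T @ (A / s[:, None] ** 2)  # A^T diag(1/s^2) A

    H = hessian(x)
    # Propose y ~ N(x, (r^2 / n) * H(x)^{-1}) via a Cholesky factor of H.
    L = np.linalg.cholesky(H)
    y = x + (r / np.sqrt(n)) * np.linalg.solve(L.T, rng.standard_normal(n))
    if np.any(A @ y >= b):                  # left the polytope: reject
        return x
    # Metropolis filter: account for the position-dependent proposal.
    Hy = hessian(y)
    d = x - y
    log_ratio = 0.5 * (np.linalg.slogdet(Hy)[1] - np.linalg.slogdet(H)[1]) \
        + (n / (2 * r**2)) * (d @ H @ d - d @ Hy @ d)
    return y if np.log(rng.uniform()) < log_ratio else x
```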
no code implementations • 6 Jun 2023 • Max Dabagia, Christos H. Papadimitriou, Santosh S. Vempala
Here we show that, in the same model, time can be captured naturally as precedence through synaptic weights and plasticity, and, as a result, a range of computations on sequences of assemblies can be carried out.
no code implementations • 1 Mar 2023 • Khashayar Gatmiry, Jonathan Kelner, Santosh S. Vempala
We introduce a hybrid of the Lewis weights barrier and the standard logarithmic barrier and prove that the mixing rate for the corresponding RHMC is bounded by $\tilde O(m^{1/3}n^{4/3})$, improving on the previous best bound of $\tilde O(mn^{2/3})$ (based on the log barrier).
no code implementations • 23 Feb 2023 • He Jia, Pravesh K. Kothari, Santosh S. Vempala
We present a polynomial-time algorithm for robustly learning an unknown affine transformation of the standard hypercube from samples, an important and well-studied setting for independent component analysis (ICA).
no code implementations • 13 Oct 2022 • Yunbum Kook, Yin Tat Lee, Ruoqi Shen, Santosh S. Vempala
We show that for distributions of the form $e^{-\alpha^{\top}x}$ on a polytope with $m$ constraints, the convergence rate of a family of commonly used integrators is independent of $\|\alpha\|_{2}$ and the geometry of the polytope.
no code implementations • 22 Jun 2022 • Mehrdad Ghadiri, Mohit Singh, Santosh S. Vempala
We study approximation algorithms for the socially fair $(\ell_p, k)$-clustering problem with $m$ groups, whose special cases include the socially fair $k$-median ($p=1$) and socially fair $k$-means ($p=2$) problems.
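For concreteness, a minimal sketch of the objective being approximated (our reading of the problem; the per-group normalization by group size is an assumption made for illustration):

```python
import numpy as np

def socially_fair_cost(X, groups, centers, p=2):
    """Socially fair (l_p, k)-clustering cost: the maximum over groups of the
    (size-normalized, assumed) sum of p-th powers of distances to the nearest
    center. p=1 gives socially fair k-median, p=2 socially fair k-means.
    """
    # distance from each point to its nearest center
    d = np.min(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), axis=1)
    group_costs = []
    for g in np.unique(groups):
        mask = groups == g
        group_costs.append(np.sum(d[mask] ** p) / mask.sum())
    return max(group_costs)
```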
no code implementations • 22 Apr 2022 • Khashayar Gatmiry, Santosh S. Vempala
We study the Riemannian Langevin Algorithm for the problem of sampling from a distribution with density $\nu$ with respect to the natural measure on a manifold with metric $g$.
1 code implementation • 3 Feb 2022 • Yunbum Kook, Yin Tat Lee, Ruoqi Shen, Santosh S. Vempala
We demonstrate for the first time that ill-conditioned, non-smooth, constrained distributions in very high dimension, upwards of 100,000, can be sampled efficiently $\textit{in practice}$.
no code implementations • 17 Nov 2021 • Shivam Garg, Santosh S. Vempala
We also show that feedback alignment (FA) can be far from optimal when $r < \mathrm{rank}(Y)$.
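A minimal NumPy sketch of feedback alignment in this setting (our illustration, with assumed sizes and learning rate): factor $Y \approx W_2 W_1$, but in the update for $W_1$ replace the backpropagated factor $W_2^{\top}$ with a fixed random matrix $B$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 20, 30, 5                            # assumed sizes
Y = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # rank-r target

W2 = 0.1 * rng.standard_normal((n, r))
W1 = 0.1 * rng.standard_normal((r, m))
B = rng.standard_normal((r, n))                # fixed random feedback, never trained

lr = 1e-3
for _ in range(10_000):
    E = W2 @ W1 - Y                            # residual of the factorization
    W2 -= lr * E @ W1.T                        # identical to the backprop update
    W1 -= lr * B @ E                           # FA: B stands in for W2.T

# With r >= rank(Y) the error should drop markedly; the negative result
# above concerns the regime r < rank(Y).
print("relative error:", np.linalg.norm(W2 @ W1 - Y) / np.linalg.norm(Y))
```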
no code implementations • 27 Oct 2021 • Xinyuan Cao, Weiyang Liu, Santosh S. Vempala
We prove that, for any desired accuracy on all tasks, the dimension of the learned representation remains close to that of the underlying representation.
1 code implementation • 7 Oct 2021 • Max Dabagia, Christos H. Papadimitriou, Santosh S. Vempala
Here we present such a mechanism, and prove rigorously that, for simple classification problems defined on distributions of labeled assemblies, a new assembly representing each class can be reliably formed in response to a few stimuli from the class; this assembly is henceforth reliably recalled in response to new stimuli from the same class.
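A minimal sketch of the assembly-calculus primitive underlying this result, as we understand it (all parameter values are assumptions): project a stimulus through a random synaptic graph, keep the top-$k$ neurons by input (the "cap"), and strengthen active synapses Hebbian-style, so that repeated stimuli form and then reliably recall a stable assembly.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, beta = 1000, 50, 0.1                 # area size, cap size, plasticity (assumed)
W = (rng.random((N, N)) < 0.05) * 1.0      # sparse random synapses, W[i, j]: j -> i

def project(stimulus, W, rounds=10):
    """Form an assembly: repeated k-cap with Hebbian weight updates."""
    active = np.zeros(N, dtype=bool)
    for _ in range(rounds):
        inp = W @ active + stimulus        # recurrent input plus external drive
        winners = np.argsort(inp)[-k:]     # k-winners-take-all ("k-cap")
        # Hebbian plasticity: strengthen synapses from old actives into winners
        W[np.ix_(winners, np.where(active)[0])] *= 1 + beta
        active = np.zeros(N, dtype=bool)
        active[winners] = True
    return active

stimulus = np.zeros(N)
stimulus[rng.choice(N, k, replace=False)] = 5.0
a1 = project(stimulus, W)
a2 = project(stimulus, W)                  # a repeat stimulus from the same class...
print("overlap:", (a1 & a2).sum(), "of", k)  # ...recalls largely the same assembly
```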
no code implementations • 24 Sep 2021 • Ruilin Li, Molei Tao, Santosh S. Vempala, Andre Wibisono
The Mirror Langevin Diffusion (MLD) is a sampling analogue of mirror flow in continuous time, and it converges exponentially fast under log-Sobolev or Poincar\'e inequalities relative to the Hessian metric, as shown by Chewi et al. (2020).
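A minimal sketch of the corresponding discretization, the Mirror Langevin Algorithm (our example; the entropic mirror map and Gamma target are assumptions chosen for illustration): take the Langevin step in the dual variable $\nabla\phi(x)$, with noise preconditioned by $(\nabla^2\phi)^{1/2}$, and map back through $(\nabla\phi)^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(2)
a, d, h = 3.0, 10, 1e-3   # Gamma shape, dimension, step size (assumed values)

# Target: product of Gamma(a, 1) densities on the positive orthant,
#   f(x) = sum(x) - (a - 1) * sum(log x),  grad f(x) = 1 - (a - 1)/x.
grad_f = lambda x: 1.0 - (a - 1.0) / x

# Entropic mirror map phi(x) = sum(x log x - x):
#   grad phi = log x,  hess phi = diag(1/x),  (grad phi)^{-1}(y) = exp(y).
x = np.ones(d)
samples = []
for _ in range(50_000):
    noise = np.sqrt(2 * h) * rng.standard_normal(d) / np.sqrt(x)  # (hess phi)^{1/2} xi
    y = np.log(x) - h * grad_f(x) + noise   # Langevin step in the dual variable
    x = np.exp(y)                           # map back; positivity is automatic
    samples.append(x.copy())

print("empirical mean:", np.mean(samples[10_000:], axis=0))  # should approach a = 3
```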
no code implementations • 3 Dec 2020 • Ainesh Bakshi, Ilias Diakonikolas, He Jia, Daniel M. Kane, Pravesh K. Kothari, Santosh S. Vempala
We give a polynomial-time algorithm for the problem of robustly estimating a mixture of $k$ arbitrary Gaussians in $\mathbb{R}^d$, for any fixed $k$, in the presence of a constant fraction of arbitrary corruptions.
no code implementations • 13 Jun 2019 • Santosh S. Vempala, Ruosong Wang, David P. Woodruff
We first resolve the randomized and deterministic communication complexity in the point-to-point model of communication, showing it is $\tilde{\Theta}(d^2L + sd)$ and $\tilde{\Theta}(sd^2L)$, respectively.
no code implementations • 7 May 2019 • Zongchen Chen, Santosh S. Vempala
We study Hamiltonian Monte Carlo (HMC) for sampling from a strongly logconcave density proportional to $e^{-f}$ where $f:\mathbb{R}^d \to \mathbb{R}$ is $\mu$-strongly convex and $L$-smooth (the condition number is $\kappa = L/\mu$).
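A minimal textbook sketch of the sampler being analyzed (our illustration, with an assumed Gaussian target): simulate Hamiltonian dynamics for $f$ with leapfrog steps, then apply a Metropolis correction for the discretization error.

```python
import numpy as np

def hmc_step(x, grad_f, f, eta=0.1, n_leapfrog=10, rng=np.random.default_rng()):
    """One HMC step targeting the density proportional to exp(-f)."""
    p = rng.standard_normal(len(x))              # fresh Gaussian momentum
    x_new = x.copy()
    p_new = p - 0.5 * eta * grad_f(x_new)        # initial half step
    for _ in range(n_leapfrog):
        x_new = x_new + eta * p_new              # full position step
        p_new = p_new - eta * grad_f(x_new)      # full momentum step
    p_new = p_new + 0.5 * eta * grad_f(x_new)    # trim the last step to a half step
    # Metropolis filter with Hamiltonian H(x, p) = f(x) + |p|^2 / 2
    dH = (f(x_new) + p_new @ p_new / 2) - (f(x) + p @ p / 2)
    return x_new if np.log(rng.uniform()) < -dH else x

# Example usage on a standard Gaussian, f(x) = |x|^2 / 2:
x = np.zeros(5)
for _ in range(1000):
    x = hmc_step(x, grad_f=lambda z: z, f=lambda z: z @ z / 2)
```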
no code implementations • NeurIPS 2019 • Santosh S. Vempala, Andre Wibisono
We also prove convergence guarantees in R\'enyi divergence of order $q > 1$ assuming the limit of ULA satisfies either the log-Sobolev or Poincar\'e inequality.
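ULA itself is a one-line iteration; a minimal sketch (the Gaussian target in the example is an assumption made for illustration):

```python
import numpy as np

def ula(grad_f, x0, eta=1e-2, n_iters=10_000, rng=np.random.default_rng()):
    """Unadjusted Langevin Algorithm: x <- x - eta*grad_f(x) + sqrt(2*eta)*noise.

    With a fixed step size the iterates converge to a *biased* limit (the
    "limit of ULA" referenced above), which approaches exp(-f) as eta -> 0.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iters):
        x = x - eta * grad_f(x) + np.sqrt(2 * eta) * rng.standard_normal(x.shape)
    return x

# Example: standard Gaussian target, f(x) = |x|^2 / 2, so grad_f(x) = x.
print(ula(lambda z: z, np.zeros(5)))
```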
no code implementations • 15 Dec 2018 • Yin Tat Lee, Zhao Song, Santosh S. Vempala
We apply this to the sampling problem to obtain a nearly linear implementation of HMC for a broad class of smooth, strongly logconcave densities, with the number of iterations (parallel depth) and gradient evaluations being $\mathit{polylogarithmic}$ in the dimension (rather than polynomial as in previous work).
no code implementations • 17 Oct 2017 • Yin Tat Lee, Santosh S. Vempala
A key ingredient of our analysis is a proof of an analog of the KLS conjecture for Gibbs distributions over manifolds.
no code implementations • 26 Dec 2014 • Christos H. Papadimitriou, Santosh S. Vempala
We show that PJOIN, a predictive variant of Valiant's JOIN operation, can be implemented in Valiant's neuroidal model.
no code implementations • 9 Dec 2014 • Santosh S. Vempala, Ying Xiao
We present a simple, general technique for reducing the sample complexity of matrix and tensor decomposition algorithms applied to distributions.