no code implementations • 28 Sep 2024 • Chirag Pabbaraju, Sahasrajit Sarmasarkar
There has been recent interest in understanding and characterizing the sample complexity of list learning tasks, where the learning algorithm is allowed to output a short list of $k$ predictions, and we simply require one of the predictions to be correct.
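For concreteness, the success criterion can be read as a top-$k$-style evaluation: a list of $k$ predictions counts as correct whenever the true label appears anywhere in it. A minimal sketch (illustrative only, not tied to the paper's algorithms):

```python
# Minimal sketch of the list-prediction success criterion (illustrative only):
# a list of k predictions is "correct" if it contains the true label.

def list_correct(prediction_list, true_label):
    """Return True if the true label appears among the k predictions."""
    return true_label in prediction_list

# Example: with k = 3, predicting [0, 2, 5] for a point whose label is 2 counts as correct.
assert list_correct([0, 2, 5], 2)
assert not list_correct([0, 2, 5], 4)
```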
no code implementations • 22 Jun 2024 • Roi Livni, Shay Moran, Kobbi Nissim, Chirag Pabbaraju
Our framework extends well-studied notions of stability, including Differential Privacy ($k = 0$), differentially private learning with public data (where the $k$ public datapoints are fixed in advance), and stable sample compression (where the $k$ datapoints are selected adaptively by the algorithm).
no code implementations • 24 May 2024 • Moses Charikar, Chirag Pabbaraju, Kirankumar Shiragur
In a recent and somewhat surprising work, Burns et al. (2023) empirically demonstrated that when strong models (like GPT-4) are finetuned using labels generated by weak supervisors (like GPT-2), the strong models outperform their weaker counterparts -- a phenomenon they term weak-to-strong generalization.
no code implementations • 19 Nov 2023 • Shivam Garg, Chirag Pabbaraju, Kirankumar Shiragur, Gregory Valiant
From a learning standpoint, even with $c=1$ samples from each distribution, $\Theta(k/\varepsilon^2)$ samples are necessary and sufficient to learn $\textbf{p}_{\mathrm{avg}}$ to within error $\varepsilon$ in TV distance.
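A small simulation makes the estimation side concrete: pooling the samples drawn from all the distributions and taking the empirical histogram estimates $\textbf{p}_{\mathrm{avg}}$. The setup below (domain size, number of distributions, Dirichlet-sampled distributions) is assumed purely for illustration and is not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: N distributions over a domain of size k, each contributing
# c = 1 sample; the pooled empirical histogram estimates their average p_avg.
N, k, c = 20_000, 10, 1
dists = rng.dirichlet(np.ones(k), size=N)           # N random distributions on [k]
p_avg = dists.mean(axis=0)                           # the target: their average

samples = np.array([rng.choice(k, size=c, p=p) for p in dists]).ravel()
p_hat = np.bincount(samples, minlength=k) / samples.size

tv_error = 0.5 * np.abs(p_hat - p_avg).sum()         # total variation distance
print(f"TV(p_hat, p_avg) = {tv_error:.4f}")
```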
no code implementations • 12 Aug 2023 • Chirag Pabbaraju
Every learnable binary hypothesis class (which must necessarily have finite VC dimension) admits a sample compression scheme whose size is a finite function of its VC dimension alone, independent of the sample size.
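For intuition about what a sample compression scheme is, a classical toy example (assumed here for illustration, not the paper's construction): one-dimensional thresholds have VC dimension 1, and on any realizable sample it suffices to keep a single labeled point to reconstruct a consistent hypothesis.

```python
# Toy sample compression scheme (assumed illustration, not the paper's construction)
# for 1-D thresholds h_t(x) = 1[x >= t], a class of VC dimension 1.
# Consistency of the reconstruction is guaranteed when the sample is realizable
# by some threshold, as sample compression requires.

def compress(sample):
    """Keep a single labeled point: the smallest positive example, if any."""
    positives = [x for x, y in sample if y == 1]
    return [(min(positives), 1)] if positives else []

def reconstruct(compressed):
    """Rebuild a hypothesis consistent with the original (realizable) sample."""
    if not compressed:
        return lambda x: 0                 # no positives kept: predict all-negative
    t = compressed[0][0]
    return lambda x: int(x >= t)

sample = [(-2.0, 0), (-0.5, 0), (1.0, 1), (3.0, 1)]
h = reconstruct(compress(sample))
assert all(h(x) == y for x, y in sample)   # consistent on the full sample
```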
no code implementations • 7 Nov 2022 • Moses Charikar, Chirag Pabbaraju
In this work, we consider list PAC learning, where the goal is to output a list of $k$ predictions.
no code implementations • 1 Oct 2022 • Holden Lee, Chirag Pabbaraju, Anish Sevekari, Andrej Risteski
Noise Contrastive Estimation (NCE) is a popular approach for learning probability density functions parameterized up to a constant of proportionality.
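A minimal sketch of the NCE idea, under assumed toy choices (a 1-D Gaussian model with a learned log-normalizer, standard normal noise, and a crude grid search in place of gradient descent): classify data against noise with a logistic loss on the log-density ratio, and the minimizer recovers both the model parameter and the normalizing constant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal NCE sketch (illustrative, not the paper's estimator).
# Model: unnormalized log-density log ~p_theta(x) = -(x - mu)^2 / 2 + c,
# where c plays the role of a learned log normalizing constant.
# Noise distribution: standard normal q(x).

def log_q(x):
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_p(x, mu, c):
    return -0.5 * (x - mu) ** 2 + c

def nce_loss(theta, data, noise):
    """Logistic loss for classifying data vs. noise via the log-density ratio."""
    mu, c = theta
    logit_data = log_p(data, mu, c) - log_q(data)      # should look like "data"
    logit_noise = log_p(noise, mu, c) - log_q(noise)   # should look like "noise"
    return np.mean(np.logaddexp(0.0, -logit_data)) + np.mean(np.logaddexp(0.0, logit_noise))

data = rng.normal(loc=2.0, scale=1.0, size=5000)       # true distribution N(2, 1)
noise = rng.normal(size=5000)

# Crude grid search over (mu, c) just to show the objective is minimized near the
# truth (mu = 2, c = -0.5 * log(2 * pi) ~ -0.92).
grid = [(mu, c) for mu in np.linspace(0, 4, 41) for c in np.linspace(-3, 1, 41)]
mu_hat, c_hat = min(grid, key=lambda th: nce_loss(th, data, noise))
print(f"estimated mu = {mu_hat:.2f}, estimated log-normalizer c = {c_hat:.2f}")
```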
no code implementations • NeurIPS 2021 • Holden Lee, Chirag Pabbaraju, Anish Prasad Sevekari, Andrej Risteski
As ill-conditioned Jacobians are an obstacle for likelihood-based training, the fundamental question remains: which distributions can be approximated using well-conditioned affine coupling flows?
no code implementations • ICML Workshop INNF 2021 • Holden Lee, Chirag Pabbaraju, Anish Sevekari, Andrej Risteski
As ill-conditioned Jacobians are an obstacle for likelihood-based training, the fundamental question remains: which distributions can be approximated using well-conditioned affine coupling flows?
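For readers unfamiliar with the architecture, a minimal affine coupling layer sketch (the conditioner networks here are stand-ins, not the papers' constructions): half of the input passes through unchanged and parameterizes an elementwise affine map of the other half, so the Jacobian is triangular and its conditioning is governed by the predicted log-scales.

```python
import numpy as np

# Minimal affine coupling layer sketch (illustrative; names and shapes assumed).
# (x1, x2) -> (x1, x2 * exp(log_s(x1)) + t(x1)); the Jacobian is triangular, so
# its log-determinant is the sum of the predicted log-scales, and the layer is
# well-conditioned exactly when log_s stays bounded.

def coupling_forward(x, scale_net, shift_net):
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    log_s = scale_net(x1)                  # conditioning on x1 only
    t = shift_net(x1)
    y2 = x2 * np.exp(log_s) + t
    log_det = log_s.sum(axis=-1)
    return np.concatenate([x1, y2], axis=-1), log_det

# Toy conditioner "networks" (assumed): fixed linear maps, for demonstration only.
rng = np.random.default_rng(0)
W_s, W_t = rng.normal(size=(2, 2)) * 0.1, rng.normal(size=(2, 2)) * 0.1
scale_net = lambda h: h @ W_s
shift_net = lambda h: h @ W_t

x = rng.normal(size=(5, 4))
y, log_det = coupling_forward(x, scale_net, shift_net)
print(y.shape, log_det.shape)              # (5, 4) (5,)
```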
no code implementations • ICLR 2021 • Chirag Pabbaraju, Ezra Winston, J. Zico Kolter
Several methods have been proposed in recent years to provide bounds on the Lipschitz constants of deep networks, which can be used to provide robustness guarantees, generalization bounds, and characterize the smoothness of decision boundaries.
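As a point of reference (not the paper's method), the simplest such bound multiplies the spectral norms of the weight matrices; it is valid for 1-Lipschitz activations like ReLU but typically very loose, which is what motivates tighter bounding techniques.

```python
import numpy as np

# Classical (loose) Lipschitz upper bound for a feed-forward ReLU network:
# the product of the layers' spectral norms. Shown only for contrast with
# tighter bounding methods; this is not the paper's technique.

def naive_lipschitz_upper_bound(weight_matrices):
    """Product of spectral norms; valid because ReLU is 1-Lipschitz."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weight_matrices]))

rng = np.random.default_rng(0)
weights = [rng.normal(size=(64, 32)),   # layer 1: 32 -> 64
           rng.normal(size=(32, 64)),   # layer 2: 64 -> 32
           rng.normal(size=(1, 32))]    # layer 3: 32 -> 1
print(naive_lipschitz_upper_bound(weights))
```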
1 code implementation • NeurIPS 2020 • Chirag Pabbaraju, Po-Wei Wang, J. Zico Kolter
Probabilistic inference in pairwise Markov Random Fields (MRFs), i.e., computing the partition function or a MAP estimate of the variables, is a foundational problem in probabilistic graphical models.
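For a sense of scale, the partition function is a sum over exponentially many assignments; a brute-force version for a tiny binary pairwise MRF (illustrative baseline only, with assumed log-potential conventions) looks like this, and it is precisely this enumeration that approximate-inference methods avoid.

```python
import itertools
import numpy as np

# Brute-force partition function of a small binary pairwise MRF (illustrative
# baseline only; exact enumeration costs 2^n and is intractable beyond toy sizes).

def partition_function(unary, pairwise, edges):
    """unary[i][x_i] and pairwise[e][x_i][x_j] are log-potentials (assumed convention)."""
    n = len(unary)
    Z = 0.0
    for assignment in itertools.product([0, 1], repeat=n):
        score = sum(unary[i][assignment[i]] for i in range(n))
        score += sum(pairwise[e][assignment[i]][assignment[j]]
                     for e, (i, j) in enumerate(edges))
        Z += np.exp(score)
    return Z

rng = np.random.default_rng(0)
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]             # a 4-cycle
unary = rng.normal(size=(n, 2))
pairwise = rng.normal(size=(len(edges), 2, 2))
print(partition_function(unary, pairwise, edges))
```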
1 code implementation • Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST'19) 2019 • Shishir G. Patil, Don Dennis, Chirag Pabbaraju, Nadeem Shaheer, Harsha Vardhan Simhadri, Vivek Seshadri, Manik Varma, Prateek Jain
Our in-lab study shows that GesturePod achieves 92% gesture recognition accuracy and can help perform common smartphone tasks faster.
Ranked #1 on Gesture Recognition on GesturePod
1 code implementation • 12 Jul 2019 • Chirag Pabbaraju, Prateek Jain
In this paper, we consider the problem of learning functions over sets, i.e., functions that are invariant to permutations of the input set items.
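A common way to build such functions is the sum-decomposition form $\rho\left(\sum_i \phi(x_i)\right)$; the sketch below uses stand-in linear/ReLU maps (assumed, not the paper's architecture) just to show why the output is unaffected by reordering the set.

```python
import numpy as np

# Sketch of a permutation-invariant set function in the sum-decomposition style
# (rho applied to a sum of per-element embeddings phi); the specific maps here
# are stand-ins, not the paper's architecture.

rng = np.random.default_rng(0)
W_phi = rng.normal(size=(3, 8))    # per-element embedding
w_rho = rng.normal(size=8)         # read-out applied to the pooled embedding

def set_function(X):
    """X has shape (set_size, 3); the output is invariant to row permutations."""
    pooled = np.maximum(X @ W_phi, 0).sum(axis=0)    # phi (ReLU features), then sum-pool
    return float(pooled @ w_rho)                     # rho on the pooled vector

X = rng.normal(size=(5, 3))
perm = rng.permutation(5)
assert np.isclose(set_function(X), set_function(X[perm]))   # order does not matter
```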
1 code implementation • NeurIPS 2018 • Don Dennis, Chirag Pabbaraju, Harsha Vardhan Simhadri, Prateek Jain
We propose a method, EMI-RNN, that exploits these observations by using a multiple instance learning formulation along with an early prediction technique to learn a model that achieves better accuracy compared to baseline models, while simultaneously reducing computation by a large fraction.
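A toy sketch of the two ingredients, with a hypothetical window scorer standing in for the RNN (this is not EMI-RNN itself): the sequence label is driven by its best short window (the multiple-instance view), and scanning stops as soon as one window is confident (the early-prediction view).

```python
import numpy as np

# Toy sketch of (1) multiple-instance aggregation over short windows and
# (2) early exit once a window is confident. The scorer below is a hypothetical
# stand-in for a trained per-window classifier.

def instance_score(window):
    """Stand-in for a per-window classifier (e.g., a window-level logit)."""
    return float(window.mean())              # hypothetical scoring rule

def classify_sequence(sequence, window_len=16, threshold=0.8):
    n_windows = 0
    for start in range(0, len(sequence) - window_len + 1, window_len):
        n_windows += 1
        if instance_score(sequence[start:start + window_len]) > threshold:
            return 1, n_windows               # early exit: confident positive window
    return 0, n_windows                       # no window fired: negative sequence

rng = np.random.default_rng(0)
seq = rng.normal(size=256)
seq[32:48] += 2.0                             # embed a "signature" early in the sequence
label, windows_used = classify_sequence(seq)
print(label, windows_used)                    # detects the event after only a few windows
```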