Search Results for author: Chirag Pabbaraju

Found 14 papers, 4 papers with code

A Characterization of List Regression

no code implementations 28 Sep 2024 Chirag Pabbaraju, Sahasrajit Sarmasarkar

There has been recent interest in understanding and characterizing the sample complexity of list learning tasks, where the learning algorithm is allowed to output a short list of $k$ predictions, and only one of the predictions is required to be correct.

Classification, Regression
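
To make the setting concrete, here is a minimal sketch of a list-regression loss in Python (the function name and the $\epsilon$-ball notion of correctness are illustrative choices, not necessarily the paper's formal definitions): the learner outputs $k$ values and incurs no loss if any one of them lands close to the target.

```python
# Sketch of a list-regression loss: a list of k predictions is "correct"
# if at least one entry is eps-close to the true label.
def list_loss(y_true, y_list, eps=0.1):
    return 0.0 if min(abs(y - y_true) for y in y_list) <= eps else 1.0

# Why lists help: if the label is bimodal given x (say +1 or -1 with equal
# probability), any single prediction errs half the time, but the 2-element
# list [+1.0, -1.0] is always correct under this loss.
print(list_loss(y_true=1.0, y_list=[1.0, -1.0]))   # 0.0
print(list_loss(y_true=-1.0, y_list=[1.0, -1.0]))  # 0.0
```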

Credit Attribution and Stable Compression

no code implementations 22 Jun 2024 Roi Livni, Shay Moran, Kobbi Nissim, Chirag Pabbaraju

Our framework extends well-studied notions of stability, including Differential Privacy ($k = 0$), differentially private learning with public data (where the $k$ public datapoints are fixed in advance), and stable sample compression (where the $k$ datapoints are selected adaptively by the algorithm).

PAC learning

Quantifying the Gain in Weak-to-Strong Generalization

no code implementations 24 May 2024 Moses Charikar, Chirag Pabbaraju, Kirankumar Shiragur

In a recent and somewhat surprising work, Burns et al. (2023) empirically demonstrated that when strong models (like GPT-4) are finetuned using labels generated by weak supervisors (like GPT-2), the strong models outperform their weaker counterparts -- a phenomenon they term weak-to-strong generalization.

Testing with Non-identically Distributed Samples

no code implementations 19 Nov 2023 Shivam Garg, Chirag Pabbaraju, Kirankumar Shiragur, Gregory Valiant

From a learning standpoint, even with only $c=1$ sample from each distribution, $\Theta(k/\varepsilon^2)$ samples in total are necessary and sufficient to learn $\textbf{p}_{\mathrm{avg}}$ to within error $\varepsilon$ in TV distance.

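As a quick empirical illustration of the snippet's claim (a sketch with my own setup, not the paper's experiments): draw one sample from each of $n$ distinct distributions over a domain of size $k$; the empirical histogram approaches $\textbf{p}_{\mathrm{avg}}$ in TV distance at roughly the $\sqrt{k/n}$ rate the bound suggests.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 10, 50_000                          # support size, number of distinct distributions

P = rng.dirichlet(np.ones(k), size=n)      # row i is a distribution p_i over {0, ..., k-1}
p_avg = P.mean(axis=0)

# Vectorized inverse-CDF sampling: exactly one sample from each p_i (c = 1).
u = rng.random(n)
samples = (u[:, None] > P.cumsum(axis=1)).sum(axis=1)
p_hat = np.bincount(samples, minlength=k) / n

tv = 0.5 * np.abs(p_hat - p_avg).sum()     # expect TV ~ sqrt(k / n)
print(f"TV(p_hat, p_avg) = {tv:.4f}")
```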

Multiclass Learnability Does Not Imply Sample Compression

no code implementations 12 Aug 2023 Chirag Pabbaraju

Every learnable binary hypothesis class (which must necessarily have finite VC dimension) admits a sample compression scheme whose size is a finite function of its VC dimension alone, independent of the sample size.
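
For intuition about what a sample compression scheme is, here is the classic size-one scheme for one-dimensional thresholds, a class of VC dimension 1 (a sketch with invented helper names; the paper's multiclass counterexample is far removed from this toy case):

```python
# Size-1 sample compression scheme for 1-D thresholds h_t(x) = 1[x >= t].
# compress() keeps a single labeled point; reconstruct() recovers a hypothesis
# consistent with the entire original (realizable) sample.

def compress(sample):                        # sample: list of (x, y) pairs, y in {0, 1}
    pos = [x for x, y in sample if y == 1]
    if pos:
        return [(min(pos), 1)]               # leftmost positive point pins down the threshold
    return [(max(x for x, _ in sample), 0)]  # all-negative sample: keep the rightmost point

def reconstruct(kept):
    (x0, y0), = kept
    t = x0 if y0 == 1 else float("inf")      # all-negative case: threshold beyond every point
    return lambda x: int(x >= t)

sample = [(0.2, 0), (0.9, 1), (0.5, 0), (1.7, 1)]
h = reconstruct(compress(sample))
assert all(h(x) == y for x, y in sample)     # consistent on the full sample
```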

A Characterization of List Learnability

no code implementations 7 Nov 2022 Moses Charikar, Chirag Pabbaraju

In this work, we consider list PAC learning, where the goal is to output a list of $k$ predictions.

Learning Theory, PAC learning

Pitfalls of Gaussians as a noise distribution in NCE

no code implementations 1 Oct 2022 Holden Lee, Chirag Pabbaraju, Anish Sevekari, Andrej Risteski

Noise Contrastive Estimation (NCE) is a popular approach for learning probability density functions parameterized up to a constant of proportionality.
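
For readers unfamiliar with NCE, the sketch below (my own minimal version, using Gaussian noise as in the paper's setting) shows the core idea: treat the unknown normalizing constant as a free parameter and fit the unnormalized density by logistic regression between data and noise samples.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x_data = rng.normal(2.0, 1.5, size=2000)          # samples from the unknown density
x_noise = rng.normal(0.0, 1.0, size=2000)         # Gaussian noise, as in the paper's setting

def log_q(x):                                     # log-density of the standard Gaussian noise
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_p(x, theta):                              # unnormalized model; log Z is a free parameter
    mu, log_sigma, log_Z = theta
    return -0.5 * ((x - mu) / np.exp(log_sigma)) ** 2 - log_Z

def nce_loss(theta):
    # Logistic regression with logit log p(x) - log q(x), in numerically stable form.
    logit_data = log_p(x_data, theta) - log_q(x_data)
    logit_noise = log_p(x_noise, theta) - log_q(x_noise)
    return np.mean(np.logaddexp(0.0, -logit_data)) + np.mean(np.logaddexp(0.0, logit_noise))

theta_hat = minimize(nce_loss, x0=np.zeros(3)).x
print("estimated mu, sigma:", theta_hat[0], np.exp(theta_hat[1]))
```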

Universal Approximation Using Well-Conditioned Normalizing Flows

no code implementations NeurIPS 2021 Holden Lee, Chirag Pabbaraju, Anish Prasad Sevekari, Andrej Risteski

As ill-conditioned Jacobians are an obstacle for likelihood-based training, the fundamental question remains: which distributions can be approximated using well-conditioned affine coupling flows?

Universal Approximation for Log-concave Distributions using Well-conditioned Normalizing Flows

no code implementations ICML Workshop INNF 2021 Holden Lee, Chirag Pabbaraju, Anish Sevekari, Andrej Risteski

As ill-conditioned Jacobians are an obstacle for likelihood-based training, the fundamental question remains: which distributions can be approximated using well-conditioned affine coupling flows?
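
To unpack the question posed in these two papers, recall that an affine coupling layer rescales half of the coordinates by $\exp(s(\cdot))$, so its Jacobian is triangular and its conditioning is governed by how far $s$ strays from zero. A minimal sketch (function names are mine; this is not the papers' construction):

```python
import numpy as np

def coupling_forward(x, s_net, t_net):
    """One affine coupling layer. The Jacobian is triangular with diagonal
    (1, ..., 1, exp(s)), so keeping s bounded keeps the layer well-conditioned."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s, t = s_net(x1), t_net(x1)
    y = np.concatenate([x1, x2 * np.exp(s) + t], axis=-1)
    return y, np.sum(s, axis=-1)              # log |det Jacobian|

def coupling_inverse(y, s_net, t_net):
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    s, t = s_net(y1), t_net(y1)
    return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=-1)

# Bounded s (here via tanh) forces exp(s) into [1/e, e]: a well-conditioned layer.
s_net = lambda x1: np.tanh(x1)
t_net = lambda x1: x1

x = np.random.default_rng(0).normal(size=(4, 6))
y, log_det = coupling_forward(x, s_net, t_net)
assert np.allclose(coupling_inverse(y, s_net, t_net), x)
```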

Estimating Lipschitz constants of monotone deep equilibrium models

no code implementations ICLR 2021 Chirag Pabbaraju, Ezra Winston, J. Zico Kolter

Several methods have been proposed in recent years to provide bounds on the Lipschitz constants of deep networks, which can be used to provide robustness guarantees, generalization bounds, and characterize the smoothness of decision boundaries.

Generalization Bounds
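
For context, the classical bound for a feedforward network with 1-Lipschitz activations is the product of the layers' spectral norms, which is sound but typically very loose; this looseness is what motivates tighter certified bounds like the paper's, which targets monotone deep equilibrium models and is not attempted here. A sketch of the naive bound (names are mine):

```python
import numpy as np

def naive_lipschitz_bound(weights):
    """Upper bound on the Lipschitz constant of an MLP with 1-Lipschitz
    activations (e.g. ReLU): the product of the weight matrices' spectral norms."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

rng = np.random.default_rng(0)
layers = [rng.normal(size=(32, 64)) / 8.0,   # W1: R^64 -> R^32
          rng.normal(size=(10, 32)) / 6.0]   # W2: R^32 -> R^10
print(naive_lipschitz_bound(layers))
```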

Efficient semidefinite-programming-based inference for binary and multi-class MRFs

1 code implementation NeurIPS 2020 Chirag Pabbaraju, Po-Wei Wang, J. Zico Kolter

Probabilistic inference in pairwise Markov Random Fields (MRFs), i.e., computing the partition function or a MAP estimate of the variables, is a foundational problem in probabilistic graphical models.
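
To see why exact inference is hard, here is a brute-force partition-function computation for a tiny binary MRF (a sketch with my own parameterization); the sum ranges over all $2^n$ spin configurations, which is precisely the blow-up that SDP relaxations sidestep.

```python
import numpy as np
from itertools import product

def partition_function(J, h):
    """Z = sum over x in {-1, +1}^n of exp(0.5 * x'Jx + h'x).
    Exact but exponential in n; SDP relaxations trade exactness for tractability."""
    n = len(h)
    return sum(
        np.exp(0.5 * x @ J @ x + h @ x)
        for x in (np.array(s) for s in product([-1.0, 1.0], repeat=n))
    )

rng = np.random.default_rng(0)
n = 10
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
h = rng.normal(size=n)
print(partition_function(J, h))   # 2**10 = 1024 terms; already infeasible for n ~ 50
```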

Learning Functions over Sets via Permutation Adversarial Networks

1 code implementation 12 Jul 2019 Chirag Pabbaraju, Prateek Jain

In this paper, we consider the problem of learning functions over sets, i.e., functions that are invariant to permutations of input set items.

Recommendation Systems
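
A standard way to hard-wire permutation invariance is the sum-decomposition $f(X) = \rho\left(\sum_i \phi(x_i)\right)$, as in DeepSets; the sketch below illustrates only that invariance property, not the paper's adversarial training scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 8)), rng.normal(size=(8,))

phi = lambda X: np.tanh(X @ W1)          # per-item embedding
rho = lambda z: z @ W2                   # readout on the pooled embedding

def set_function(X):
    # f(X) = rho(sum_i phi(x_i)): sum-pooling erases the order of the items.
    return rho(phi(X).sum(axis=0))

X = rng.normal(size=(5, 3))              # a set of 5 items, each 3-dimensional
X_shuffled = X[rng.permutation(5)]
assert np.isclose(set_function(X), set_function(X_shuffled))
```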

Multiple Instance Learning for Efficient Sequential Data Classification on Resource-constrained Devices

1 code implementation NeurIPS 2018 Don Dennis, Chirag Pabbaraju, Harsha Vardhan Simhadri, Prateek Jain

We propose a method, EMI-RNN, that exploits these observations using a multiple instance learning formulation along with an early prediction technique, learning a model that achieves better accuracy than baseline models while reducing computation by a large fraction.

General Classification, Multiple Instance Learning, +2
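
As a toy illustration of the multiple-instance view of sequential data (invented names; this is not EMI-RNN itself): slice the series into short overlapping windows, score each window, and declare the whole series positive at the first confident window, which is where the early-prediction savings come from.

```python
import numpy as np

def mil_early_predict(sequence, window, score, threshold=0.5):
    """Multiple-instance view of a time series: each window is an instance and
    the series (the bag) is positive iff some window is. Returning at the first
    confident window yields the early-prediction computation saving."""
    step = max(1, window // 2)
    for i in range(0, len(sequence) - window + 1, step):
        if score(sequence[i:i + window]) > threshold:
            return 1, i                  # positive bag, detected after reading i + window steps
    return 0, len(sequence)              # negative bag: had to read everything

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 30), rng.normal(0, 1, 200)])
label, seen = mil_early_predict(x, window=20, score=lambda w: float(np.mean(w > 2.0)))
print(label, seen)   # detects the burst around t = 200 without scanning the tail
```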
