Search Results for author: Chirag Pabbaraju

Found 11 papers, 4 papers with code

Multiple Instance Learning for Efficient Sequential Data Classification on Resource-constrained Devices

1 code implementation NeurIPS 2018 Don Dennis, Chirag Pabbaraju, Harsha Vardhan Simhadri, Prateek Jain

We propose a method, EMI-RNN, that exploits these observations by using a multiple instance learning formulation along with an early prediction technique to learn a model that achieves better accuracy compared to baseline models, while simultaneously reducing computation by a large fraction.

General Classification, Multiple Instance Learning +2
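
The snippet below is a minimal, hypothetical sketch of the multiple-instance-plus-early-prediction idea described above, not the authors' EMI-RNN implementation: a long time series is split into short overlapping windows (the instances of one bag), and inference stops as soon as one window is scored confidently positive. The window scorer here is a toy stand-in for the RNN.

    import numpy as np

    def make_instances(series, window, stride):
        """Slice a 1-D series into overlapping windows (the 'instances' of one bag)."""
        return [series[i:i + window] for i in range(0, len(series) - window + 1, stride)]

    def early_predict(instances, score_fn, threshold=0.9):
        """Scan windows in order; stop at the first confident positive."""
        for t, inst in enumerate(instances):
            p = score_fn(inst)              # probability that this window contains the class signature
            if p >= threshold:
                return 1, t + 1             # positive bag, decided after t + 1 windows
        return 0, len(instances)            # never confident: predict negative

    # Hypothetical scorer: mean amplitude as a stand-in for an RNN's per-window probability.
    score_fn = lambda w: float(np.clip(np.abs(w).mean(), 0.0, 1.0))
    series = np.concatenate([np.zeros(50), np.ones(20) * 0.95, np.zeros(30)])
    label, windows_used = early_predict(make_instances(series, window=10, stride=5), score_fn)
    print(label, windows_used)              # early exit skips the remaining windows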

Efficient semidefinite-programming-based inference for binary and multi-class MRFs

1 code implementation NeurIPS 2020 Chirag Pabbaraju, Po-Wei Wang, J. Zico Kolter

Probabilistic inference in pairwise Markov Random Fields (MRFs), i.e., computing the partition function or computing a MAP estimate of the variables, is a foundational problem in probabilistic graphical models.
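
For context, a standard parameterization of a pairwise binary MRF (assumed notation, not copied from the paper) makes the two inference problems concrete; the usual SDP route relaxes the products $x_i x_j$ to inner products of unit vectors.

    Z = \sum_{x \in \{-1,+1\}^n} \exp\Big(\sum_i \theta_i x_i + \sum_{(i,j) \in E} \theta_{ij} x_i x_j\Big),
    \qquad
    x^{\mathrm{MAP}} \in \operatorname*{arg\,max}_{x \in \{-1,+1\}^n} \sum_i \theta_i x_i + \sum_{(i,j) \in E} \theta_{ij} x_i x_j .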

Learning Functions over Sets via Permutation Adversarial Networks

1 code implementation 12 Jul 2019 Chirag Pabbaraju, Prateek Jain

In this paper, we consider the problem of learning functions over sets, i.e., functions that are invariant to permutations of input set items.

Recommendation Systems
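
As a toy illustration of permutation invariance (a generic sum-pooling construction, not the permutation adversarial network proposed in the paper), a set function of the form f(S) = rho(sum over x in S of phi(x)) returns the same value under any reordering of the set's elements:

    import numpy as np

    rng = np.random.default_rng(0)
    W_phi = rng.standard_normal((3, 8))     # hypothetical per-element embedding weights
    w_rho = rng.standard_normal(8)          # hypothetical readout weights

    def set_function(items):
        phi = np.tanh(items @ W_phi)        # embed each element independently
        pooled = phi.sum(axis=0)            # order-independent aggregation
        return float(w_rho @ pooled)

    S = rng.standard_normal((5, 3))         # a set of 5 elements, each 3-dimensional
    print(np.isclose(set_function(S), set_function(S[::-1])))   # True: value ignores ordering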

Estimating Lipschitz constants of monotone deep equilibrium models

no code implementations ICLR 2021 Chirag Pabbaraju, Ezra Winston, J Zico Kolter

Several methods have been proposed in recent years to provide bounds on the Lipschitz constants of deep networks, which can be used to provide robustness guarantees, generalization bounds, and characterize the smoothness of decision boundaries.

Generalization Bounds
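
A common but coarse bound, shown here only to make the quantity concrete (it is not the paper's method for monotone deep equilibrium models): for a feedforward network with 1-Lipschitz activations, the Lipschitz constant is at most the product of the layers' spectral norms.

    import numpy as np

    def naive_lipschitz_bound(weights):
        """Product of spectral norms: an upper bound on the network's Lipschitz constant."""
        return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

    rng = np.random.default_rng(1)
    layers = [rng.standard_normal((16, 8)), rng.standard_normal((4, 16))]   # hypothetical weights
    print(naive_lipschitz_bound(layers))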

Universal Approximation for Log-concave Distributions using Well-conditioned Normalizing Flows

no code implementations ICML Workshop INNF 2021 Holden Lee, Chirag Pabbaraju, Anish Sevekari, Andrej Risteski

As ill-conditioned Jacobians are an obstacle for likelihood-based training, the fundamental question remains: which distributions can be approximated using well-conditioned affine coupling flows?

Universal Approximation Using Well-Conditioned Normalizing Flows

no code implementations NeurIPS 2021 Holden Lee, Chirag Pabbaraju, Anish Prasad Sevekari, Andrej Risteski

As ill-conditioned Jacobians are an obstacle for likelihood-based training, the fundamental question remains: which distributions can be approximated using well-conditioned affine coupling flows?
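
A minimal affine coupling layer, sketched under generic assumptions (the paper's specific constructions and conditioning guarantees are not reproduced): half of the coordinates pass through unchanged while the other half receive an elementwise affine transform, so the Jacobian is triangular and its log-determinant is the sum of the log-scales; "well-conditioned" refers to keeping that Jacobian's singular values within a bounded range.

    import numpy as np

    def coupling_forward(x, s_fn, t_fn):
        """One affine coupling layer: y1 = x1, y2 = x2 * exp(s(x1)) + t(x1)."""
        d = x.shape[-1] // 2
        x1, x2 = x[..., :d], x[..., d:]
        s, t = s_fn(x1), t_fn(x1)
        y2 = x2 * np.exp(s) + t             # elementwise affine map of the second half
        logdet = s.sum(axis=-1)             # log|det J|: the Jacobian is triangular
        return np.concatenate([x1, y2], axis=-1), logdet

    # Hypothetical conditioner networks; fixed small linear maps keep the scales near 1.
    rng = np.random.default_rng(2)
    A, B = 0.1 * rng.standard_normal((2, 2)), 0.1 * rng.standard_normal((2, 2))
    y, logdet = coupling_forward(rng.standard_normal(4), lambda z: z @ A, lambda z: z @ B)
    print(y, logdet)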

Pitfalls of Gaussians as a noise distribution in NCE

no code implementations 1 Oct 2022 Holden Lee, Chirag Pabbaraju, Anish Sevekari, Andrej Risteski

Noise Contrastive Estimation (NCE) is a popular approach for learning probability density functions parameterized up to a constant of proportionality.
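
Concretely, with one noise sample per data sample, the standard NCE objective trains a (possibly unnormalized) model p_theta by discriminating data from samples drawn from a noise distribution q; the paper studies the consequences of taking q to be a Gaussian:

    \mathcal{L}(\theta) = -\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log \frac{p_\theta(x)}{p_\theta(x) + q(x)}\right]
    - \mathbb{E}_{\tilde{x} \sim q}\!\left[\log \frac{q(\tilde{x})}{p_\theta(\tilde{x}) + q(\tilde{x})}\right].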

A Characterization of List Learnability

no code implementations 7 Nov 2022 Moses Charikar, Chirag Pabbaraju

In this work we consider list PAC learning where the goal is to output a list of $k$ predictions.

Learning Theory, PAC learning
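
The sketch below illustrates only the prediction semantics of list learning (hypothetical toy data, not the paper's characterization): a k-list learner outputs up to $k$ candidate labels per point, and a prediction counts as correct when the true label appears in the list.

    def list_error(list_predictor, examples, k):
        """Empirical error of a k-list predictor: a miss only if the true label is absent."""
        mistakes = 0
        for x, y in examples:
            preds = list_predictor(x)
            assert len(preds) <= k
            mistakes += int(y not in preds)
        return mistakes / len(examples)

    # Hypothetical 2-list predictor on toy integer inputs.
    examples = [(0, "a"), (1, "b"), (2, "c")]
    predictor = lambda x: ["a", "b"] if x < 2 else ["b", "c"]
    print(list_error(predictor, examples, k=2))   # 0.0: every true label appears in its list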

Multiclass Learnability Does Not Imply Sample Compression

no code implementations 12 Aug 2023 Chirag Pabbaraju

Every learnable binary hypothesis class (which must necessarily have finite VC dimension) admits a sample compression scheme of size only a finite function of its VC dimension, independent of the sample size.
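
To make the notion concrete, here is the textbook size-1 compression scheme for threshold classifiers on the real line (VC dimension 1). It illustrates what a sample compression scheme is; it is not a construction from the paper, whose point is that multiclass learnable classes need not admit such schemes.

    def compress(sample):
        """Keep at most one point: the smallest positive example (or nothing)."""
        positives = [x for x, y in sample if y == 1]
        return [min(positives)] if positives else []

    def reconstruct(kept):
        """Rebuild a threshold classifier h(x) = 1[x >= t] from the compressed set."""
        t = kept[0] if kept else float("inf")             # empty set -> all-negative hypothesis
        return lambda x: int(x >= t)

    sample = [(-2.0, 0), (0.5, 1), (3.0, 1), (-0.1, 0)]   # realizable by any threshold in (-0.1, 0.5]
    h = reconstruct(compress(sample))
    print(all(h(x) == y for x, y in sample))              # True: consistent with the full sample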

Testing with Non-identically Distributed Samples

no code implementations 19 Nov 2023 Shivam Garg, Chirag Pabbaraju, Kirankumar Shiragur, Gregory Valiant

From a learning standpoint, even with $c=1$ samples from each distribution, $\Theta(k/\varepsilon^2)$ samples are necessary and sufficient to learn $\textbf{p}_{\mathrm{avg}}$ to within error $\varepsilon$ in TV distance.
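
With notation assumed from the abstract (samples drawn from distributions $\mathbf{p}_1, \ldots, \mathbf{p}_T$ over a domain of size $k$), the target distribution and the error measure are:

    \mathbf{p}_{\mathrm{avg}} = \frac{1}{T}\sum_{t=1}^{T} \mathbf{p}_t,
    \qquad
    d_{\mathrm{TV}}(\mathbf{p}, \mathbf{q}) = \frac{1}{2}\sum_{x} \left|\mathbf{p}(x) - \mathbf{q}(x)\right|,

and the learner must output $\hat{\mathbf{p}}$ with $d_{\mathrm{TV}}(\hat{\mathbf{p}}, \mathbf{p}_{\mathrm{avg}}) \le \varepsilon$.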

