Search Results for author: Chirag Pabbaraju

Found 7 papers, 4 with code

Universal Approximation Using Well-Conditioned Normalizing Flows

no code implementations NeurIPS 2021 Holden Lee, Chirag Pabbaraju, Anish Prasad Sevekari, Andrej Risteski

As ill-conditioned Jacobians are an obstacle for likelihood-based training, the fundamental question remains: which distributions can be approximated using well-conditioned affine coupling flows?
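For context on the question above (this explanation is not part of the listing): an affine coupling layer splits its input into two blocks, passes the first block through unchanged, and applies an element-wise affine map to the second, with scales and shifts computed from the first block. The Jacobian is triangular, and its conditioning is governed by the learned scales. A minimal sketch, where `scale_net` and `shift_net` are hypothetical stand-ins for the coupling networks:

```python
import numpy as np

def affine_coupling_forward(x, scale_net, shift_net, d):
    """One affine coupling layer: keep x[:d] fixed, affinely transform x[d:].

    scale_net and shift_net are arbitrary functions of the first d coordinates;
    here they stand in for small neural networks.
    """
    x1, x2 = x[:d], x[d:]
    s = scale_net(x1)          # log-scales for the transformed block
    t = shift_net(x1)          # shifts for the transformed block
    y = np.concatenate([x1, x2 * np.exp(s) + t])
    # The Jacobian is triangular with diagonal (1, ..., 1, exp(s)); its
    # condition number is governed by the spread of exp(s), so keeping the
    # log-scales bounded keeps the layer well-conditioned.
    log_det_jacobian = np.sum(s)
    return y, log_det_jacobian

# Toy usage with hypothetical scale/shift maps
x = np.array([0.5, -1.0, 2.0, 0.3])
scale_net = lambda x1: 0.1 * x1      # bounded log-scales -> well-conditioned
shift_net = lambda x1: x1 ** 2
y, logdet = affine_coupling_forward(x, scale_net, shift_net, d=2)
```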

Universal Approximation for Log-concave Distributions using Well-conditioned Normalizing Flows

no code implementations ICML Workshop INNF 2021 Holden Lee, Chirag Pabbaraju, Anish Sevekari, Andrej Risteski

As ill-conditioned Jacobians are an obstacle for likelihood-based training, the fundamental question remains: which distributions can be approximated using well-conditioned affine coupling flows?

Estimating Lipschitz constants of monotone deep equilibrium models

no code implementations ICLR 2021 Chirag Pabbaraju, Ezra Winston, J Zico Kolter

Several methods have been proposed in recent years to provide bounds on the Lipschitz constants of deep networks, which can be used to provide robustness guarantees, generalization bounds, and characterize the smoothness of decision boundaries.

Generalization Bounds
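As a point of reference (this is a standard baseline, not the monDEQ-specific bound from the paper above): for a feed-forward network with 1-Lipschitz activations, the product of the layers' spectral norms gives a simple but typically loose Lipschitz upper bound; the bounds studied in such papers aim to be much tighter. A minimal sketch:

```python
import numpy as np

def layerwise_lipschitz_upper_bound(weight_matrices):
    """Crude Lipschitz upper bound for a feed-forward net with 1-Lipschitz
    activations: the product of the layers' spectral norms. A generic
    baseline, not the method of the paper above."""
    bound = 1.0
    for W in weight_matrices:
        bound *= np.linalg.norm(W, ord=2)   # largest singular value
    return bound

# Toy usage on random weights
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
print(layerwise_lipschitz_upper_bound(Ws))
```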

Efficient semidefinite-programming-based inference for binary and multi-class MRFs

1 code implementation NeurIPS 2020 Chirag Pabbaraju, Po-Wei Wang, J. Zico Kolter

Probabilistic inference in pairwise Markov Random Fields (MRFs), i.e., computing the partition function or computing a MAP estimate of the variables, is a foundational problem in probabilistic graphical models.
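For concreteness, the partition function is a sum over all variable configurations; computing it by brute force is exponential in the number of variables, which motivates relaxations such as the semidefinite-programming approach above. A minimal sketch for a binary (Ising-style) pairwise MRF; this parameterization is illustrative, not necessarily the one used in the paper:

```python
import itertools
import numpy as np

def partition_function(J, h):
    """Brute-force partition function of a pairwise binary MRF with
    x_i in {-1, +1}: Z = sum_x exp(0.5 * x^T J x + h^T x).
    Exponential in n; only meant to illustrate the quantity being inferred.
    """
    n = len(h)
    Z = 0.0
    for x in itertools.product([-1.0, 1.0], repeat=n):
        x = np.array(x)
        Z += np.exp(0.5 * x @ J @ x + h @ x)
    return Z

# Toy usage on a 3-node MRF with symmetric couplings
J = np.array([[0.0, 0.5, -0.2],
              [0.5, 0.0, 0.3],
              [-0.2, 0.3, 0.0]])
h = np.array([0.1, -0.4, 0.2])
print(partition_function(J, h))
```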

Learning Functions over Sets via Permutation Adversarial Networks

1 code implementation 12 Jul 2019 Chirag Pabbaraju, Prateek Jain

In this paper, we consider the problem of learning functions over sets, i.e., functions that are invariant to permutations of input set items.

Recommendation Systems
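For background (this is the standard Deep Sets-style construction, not the permutation-adversarial method of the paper above), one common way to obtain a permutation-invariant set function is sum pooling, f(X) = rho(sum_i phi(x_i)). A minimal sketch with `phi` and `rho` as hypothetical stand-ins for small networks:

```python
import numpy as np

def sum_pool_set_function(X, phi, rho):
    """Permutation-invariant set function in the Deep Sets style:
    f(X) = rho(sum_i phi(x_i)). Shuffling the rows of X leaves the output
    unchanged. A generic construction for context, not the method above."""
    return rho(np.sum(phi(X), axis=0))

# Toy usage: phi and rho stand in for small networks
phi = lambda X: np.tanh(X)
rho = lambda z: float(np.sum(z ** 2))
X = np.array([[1.0, 2.0], [0.5, -1.0], [3.0, 0.0]])
assert np.isclose(sum_pool_set_function(X, phi, rho),
                  sum_pool_set_function(X[::-1], phi, rho))
```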

Multiple Instance Learning for Efficient Sequential Data Classification on Resource-constrained Devices

1 code implementation NeurIPS 2018 Don Dennis, Chirag Pabbaraju, Harsha Vardhan Simhadri, Prateek Jain

We propose a method, EMI-RNN, that exploits these observations by using a multiple instance learning formulation along with an early prediction technique to learn a model that achieves better accuracy compared to baseline models, while simultaneously reducing computation by a large fraction.

General Classification, Multiple Instance Learning +2
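For background on the multiple-instance framing (a generic sketch, not EMI-RNN's actual training procedure): a long sequence is split into short windows treated as instances, and the sequence-level label is obtained by aggregating per-window scores, for example by max-pooling:

```python
import numpy as np

def bag_prediction(instance_scores):
    """Multiple-instance view of sequential data: the sequence (bag) is
    labeled positive if any window (instance) is. Max-pooling instance
    scores is the simplest such aggregation; EMI-RNN's training procedure
    is more involved."""
    return float(np.max(instance_scores))

# Toy usage: scores for 5 windows of one sequence, from a hypothetical classifier
scores = np.array([0.05, 0.10, 0.92, 0.30, 0.08])
print(bag_prediction(scores))   # sequence labeled by its most confident window
```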
