no code implementations • NeurIPS 2021 • Holden Lee, Chirag Pabbaraju, Anish Prasad Sevekari, Andrej Risteski
As ill-conditioned Jacobians are an obstacle for likelihood-based training, the fundamental question remains: which distributions can be approximated using well-conditioned affine coupling flows?
no code implementations • ICML Workshop INNF 2021 • Holden Lee, Chirag Pabbaraju, Anish Sevekari, Andrej Risteski
As ill-conditioned Jacobians are an obstacle for likelihood-based training, the fundamental question remains: which distributions can be approximated using well-conditioned affine coupling flows?
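The two entries above concern affine coupling flows and how well-conditioned their Jacobians can be. As a rough illustration only (not the paper's construction), the sketch below builds a single affine coupling layer with toy linear scale/shift networks and inspects the condition number of its Jacobian numerically; all weights and the tanh-bounded log-scales are illustrative stand-ins.

```python
import numpy as np

def coupling(x, W_s, W_t, d):
    """One affine coupling layer y = (x1, x2 * exp(s(x1)) + t(x1)).
    The first d coordinates pass through unchanged and parameterize an
    elementwise affine map of the remaining coordinates."""
    x1, x2 = x[:d], x[d:]
    s = np.tanh(W_s @ x1)          # bounded log-scales keep exp(s) moderate
    t = W_t @ x1
    return np.concatenate([x1, x2 * np.exp(s) + t])

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian, used here only to inspect conditioning."""
    n = x.size
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

rng = np.random.default_rng(0)
d = 2
W_s, W_t = 0.5 * rng.normal(size=(d, d)), rng.normal(size=(d, d))   # toy stand-in networks
x = rng.normal(size=2 * d)

J = numerical_jacobian(lambda z: coupling(z, W_s, W_t, d), x)
print("Jacobian condition number:", np.linalg.cond(J))
```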
no code implementations • ICLR 2021 • Chirag Pabbaraju, Ezra Winston, J. Zico Kolter
Several methods have been proposed in recent years to provide bounds on the Lipschitz constants of deep networks, which can be used to provide robustness guarantees and generalization bounds, and to characterize the smoothness of decision boundaries.
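For context on the kind of quantity this entry refers to (this is the standard baseline bound, not the paper's method), the simplest global Lipschitz upper bound for a feed-forward ReLU network is the product of per-layer spectral norms; a minimal sketch with illustrative random weights:

```python
import numpy as np

def naive_lipschitz_bound(weights):
    """Upper-bounds the Lipschitz constant of x -> W3 relu(W2 relu(W1 x))
    by the product of per-layer spectral norms (ReLU is 1-Lipschitz);
    valid but often loose."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

rng = np.random.default_rng(0)
W1, W2, W3 = rng.normal(size=(32, 8)), rng.normal(size=(16, 32)), rng.normal(size=(1, 16))
print("Lipschitz upper bound:", naive_lipschitz_bound([W1, W2, W3]))
```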
1 code implementation • NeurIPS 2020 • Chirag Pabbaraju, Po-Wei Wang, J. Zico Kolter
Probabilistic inference in pairwise Markov Random Fields (MRFs), i.e., computing the partition function or computing a MAP estimate of the variables, is a foundational problem in probabilistic graphical models.
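To make the two inference tasks in this entry concrete (this is exhaustive enumeration for a toy model, not the paper's method), the sketch below computes both the partition function and a MAP assignment for a tiny binary pairwise MRF with illustrative random potentials:

```python
import itertools
import numpy as np

def pairwise_energy(x, unary, pairwise, edges):
    """Energy of a binary assignment x under unary and pairwise potentials."""
    e = sum(unary[i][x[i]] for i in range(len(x)))
    e += sum(pairwise[k][x[i]][x[j]] for k, (i, j) in enumerate(edges))
    return e

def brute_force_inference(n, unary, pairwise, edges):
    """Enumerates all 2^n assignments to get the partition function Z
    and a MAP (lowest-energy) assignment. Only feasible for tiny n."""
    Z, best, best_e = 0.0, None, np.inf
    for x in itertools.product([0, 1], repeat=n):
        e = pairwise_energy(x, unary, pairwise, edges)
        Z += np.exp(-e)
        if e < best_e:
            best, best_e = x, e
    return Z, best

# A 3-node chain MRF with random potentials (illustrative only).
rng = np.random.default_rng(0)
n, edges = 3, [(0, 1), (1, 2)]
unary = rng.normal(size=(n, 2))
pairwise = rng.normal(size=(len(edges), 2, 2))
Z, x_map = brute_force_inference(n, unary, pairwise, edges)
print("partition function Z =", Z, " MAP assignment =", x_map)
```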
1 code implementation • Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST'19) 2019 • Shishir G. Patil, Don Dennis, Chirag Pabbaraju, Nadeem Shaheer, Harsha Vardhan Simhadri, Vivek Seshadri, Manik Varma, Prateek Jain
Our in-lab study shows that GesturePod achieves 92% gesture recognition accuracy and can help users perform common smartphone tasks faster.
Ranked #1 on Gesture Recognition on GesturePod
1 code implementation • 12 Jul 2019 • Chirag Pabbaraju, Prateek Jain
In this paper, we consider the problem of learning functions over sets, i.e., functions that are invariant to permutations of input set items.
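As a generic illustration of the permutation-invariance property this entry defines (a DeepSets-style sum-pooling construction, not necessarily the architecture studied in the paper), a set function built by pooling per-element embeddings gives the same output for any ordering of its inputs; all weights below are illustrative:

```python
import numpy as np

def set_function(X, W_phi, W_rho):
    """f(X) = rho(sum_i phi(x_i)): summing per-element embeddings makes the
    output invariant to any permutation of the rows of X."""
    phi = np.tanh(X @ W_phi)          # per-element embedding
    pooled = phi.sum(axis=0)          # order-independent pooling
    return pooled @ W_rho             # readout on the pooled representation

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))           # a set of 5 items, each 3-dimensional
W_phi, W_rho = rng.normal(size=(3, 8)), rng.normal(size=(8,))

perm = rng.permutation(5)
print(np.allclose(set_function(X, W_phi, W_rho),
                  set_function(X[perm], W_phi, W_rho)))   # True
```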
1 code implementation • NeurIPS 2018 • Don Dennis, Chirag Pabbaraju, Harsha Vardhan Simhadri, Prateek Jain
We propose a method, EMI-RNN, that exploits these observations by using a multiple instance learning formulation along with an early prediction technique to learn a model that achieves better accuracy than baseline models, while simultaneously reducing computation by a large fraction.
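As a rough sketch of the early-prediction idea mentioned in this entry (not the EMI-RNN training procedure itself), a recurrent classifier can stop consuming a time series as soon as its per-step prediction is sufficiently confident, which is where the computational savings come from; the recurrent cell, weights, and confidence threshold below are all illustrative stand-ins:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def early_predict(series, W_h, W_x, W_out, threshold=0.9):
    """Runs a simple recurrent cell over the time series and returns a label
    as soon as the per-step class probability exceeds `threshold`,
    instead of always processing the full sequence."""
    h = np.zeros(W_h.shape[0])
    for t, x_t in enumerate(series):
        h = np.tanh(W_h @ h + W_x @ x_t)
        p = softmax(W_out @ h)
        if p.max() >= threshold:
            return int(p.argmax()), t + 1   # label, number of steps actually used
    return int(p.argmax()), len(series)

rng = np.random.default_rng(0)
d_in, d_h, n_classes, T = 4, 16, 3, 50
W_h = 0.5 * rng.normal(size=(d_h, d_h))        # toy recurrent weights
W_x = rng.normal(size=(d_h, d_in))
W_out = rng.normal(size=(n_classes, d_h))
label, steps = early_predict(rng.normal(size=(T, d_in)), W_h, W_x, W_out)
print(f"predicted class {label} after {steps}/{T} steps")
```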