no code implementations • 18 Oct 2023 • Yuanzhi Li, Raghu Meka, Rina Panigrahy, Kulin Shah
Deep networks typically learn concepts via classifiers, which involves setting up a model and training it via gradient descent to fit the concept-labeled data.
1 code implementation • NeurIPS 2023 • Giannis Daras, Kulin Shah, Yuval Dagan, Aravind Gollakota, Alexandros G. Dimakis, Adam Klivans
We present the first diffusion-based framework that can learn an unknown distribution using only highly-corrupted samples.
1 code implementation • 7 Jul 2021 • Bhavya Kalra, Kulin Shah, Naresh Manwani
In this paper, we propose deep architectures for learning instance-specific abstain (reject option) binary classifiers.
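A reject option classifier predicts a label only when it is sufficiently confident and abstains otherwise. The sketch below is not the paper's architecture; it is a minimal illustration of the decision rule, where the rejection band may vary per instance (the instance-specific setting the abstract refers to). All scores and widths are made-up example values.

```python
import numpy as np

def reject_option_predict(scores, rejection_width):
    """Predict +1/-1, or abstain (0) when the score falls inside a
    rejection band of half-width `rejection_width` around 0.
    `rejection_width` may be a scalar (fixed band) or a per-instance
    array (instance-specific abstention)."""
    scores = np.asarray(scores, dtype=float)
    width = np.broadcast_to(np.asarray(rejection_width, dtype=float), scores.shape)
    return np.where(scores > width, 1, np.where(scores < -width, -1, 0))

# Example with per-instance rejection widths (hypothetical values)
scores = np.array([1.2, -0.3, 0.05, -2.0])
widths = np.array([0.5, 0.5, 0.1, 0.5])
print(reject_option_predict(scores, widths))  # [ 1  0  0 -1]
```

A scalar `rejection_width` recovers the classic fixed-band reject option classifier; learning the per-instance width jointly with the score is what makes the problem interesting.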
1 code implementation • 19 Jun 2021 • Kulin Shah, Amit Deshpande, Navin Goyal
In supervised learning, it is known that overparameterized neural networks with one hidden layer provably and efficiently learn and generalize, when trained using stochastic gradient descent with a sufficiently small learning rate and suitable initialization.
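The setting described above can be illustrated with a small experiment: a one-hidden-layer ReLU network, overparameterized relative to a toy task, trained by plain SGD with a small learning rate from a standard random initialization. The data, width, and learning rate below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D data: label = sign of the first coordinate (for illustration only)
X = rng.standard_normal((200, 2))
y = np.sign(X[:, 0])

# One hidden layer, heavily overparameterized for this task;
# width and learning rate are illustrative, not the paper's.
width, lr = 256, 1e-3
W = rng.standard_normal((2, width)) / np.sqrt(2)      # He-style init
a = rng.choice([-1.0, 1.0], size=width) / np.sqrt(width)

for epoch in range(50):
    for i in rng.permutation(len(X)):                 # SGD: one sample at a time
        h = np.maximum(X[i] @ W, 0.0)                 # ReLU hidden layer
        out = h @ a
        grad_out = out - y[i]                         # squared-loss residual
        W -= lr * grad_out * np.outer(X[i], a * (h > 0))
        a -= lr * grad_out * h

preds = np.sign(np.maximum(X @ W, 0.0) @ a)
print("train accuracy:", (preds == y).mean())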
no code implementations • 31 May 2021 • Kulin Shah, Pooja Gupta, Amit Deshpande, Chiranjib Bhattacharyya
Given any score function or feature representation, and only its second-order statistics on the sensitive sub-populations, we seek a threshold classifier on the given score, or a linear threshold classifier on the given feature representation, that achieves the Rawls error rate restricted to this hypothesis class.
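To make the objective concrete, here is a hedged sketch: given only per-group second-order statistics of a score (here modeled, as an assumption of this sketch, by Gaussian class-conditional score distributions), the Rawls error rate of a threshold is the worst error over sensitive groups, and we pick the threshold minimizing it. The group statistics below are hypothetical.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Hypothetical second-order statistics for two sensitive groups:
# (mean, std) of the score among positives and among negatives,
# plus the positive-class proportion within each group.
groups = {
    "A": {"pos": (1.0, 1.0), "neg": (-1.0, 1.0), "p_pos": 0.5},
    "B": {"pos": (0.5, 1.2), "neg": (-1.5, 0.8), "p_pos": 0.3},
}

def group_error(stats, t):
    """Error of the threshold classifier sign(score - t) on one group,
    assuming Gaussian class-conditional scores (a modeling choice here)."""
    mu_p, sd_p = stats["pos"]
    mu_n, sd_n = stats["neg"]
    fn = norm_cdf((t - mu_p) / sd_p)        # positives scored below t
    fp = 1.0 - norm_cdf((t - mu_n) / sd_n)  # negatives scored above t
    return stats["p_pos"] * fn + (1 - stats["p_pos"]) * fp

# Rawls error rate of threshold t = worst group error; minimize by a grid sweep.
ts = np.linspace(-3, 3, 601)
rawls = [max(group_error(s, t) for s in groups.values()) for t in ts]
best = ts[int(np.argmin(rawls))]
print(f"threshold {best:.2f}, Rawls error {min(rawls):.3f}")
```

Only means, variances, and class proportions per group enter the computation, matching the idea that second-order statistics suffice for this restricted hypothesis class.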
no code implementations • 1 Jan 2021 • Kulin Shah, Amit Deshpande, Navin Goyal
In supervised learning, it is known that overparameterized neural networks with one hidden layer provably and efficiently learn and generalize, when trained using Stochastic Gradient Descent (SGD).
no code implementations • 14 Jun 2019 • Kulin Shah, Naresh Manwani
In this paper, we propose novel algorithms for active learning of reject option classifiers.
no code implementations • 22 Apr 2019 • Kulin Shah, P. S. Sastry, Naresh Manwani
In this paper, we propose a novel mixture-of-experts architecture for learning polyhedral classifiers.
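A polyhedral classifier labels a point positive exactly when it lies inside an intersection of halfspaces, i.e. when every linear expert reports a positive margin; in the mixture-of-experts view, a gating mechanism assigns each point to the expert responsible for it during learning. The sketch below shows only the hard decision rule with fixed, hypothetical hyperplanes, not the paper's training procedure.

```python
import numpy as np

# Hypothetical hyperplanes (w, b): the positive class is the polyhedron
# where every linear expert is positive, i.e. min_k (w_k . x + b_k) > 0.
W = np.array([[ 1.0,  0.0],
              [-1.0,  0.0],
              [ 0.0,  1.0],
              [ 0.0, -1.0]])
b = np.array([1.0, 1.0, 1.0, 1.0])   # the unit box |x0| < 1, |x1| < 1

def polyhedral_predict(X):
    margins = X @ W.T + b            # each expert's signed margin per point
    return np.where(margins.min(axis=1) > 0, 1, -1)

X = np.array([[0.0, 0.0], [2.0, 0.0], [0.5, -0.5]])
print(polyhedral_predict(X))  # [ 1 -1  1]
```

The `min` over experts is what makes the positive region a convex polyhedron; softening that min into a gating distribution is the usual route to gradient-based training.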
no code implementations • 12 Feb 2018 • Kulin Shah, Naresh Manwani
We also show that the excess risk of loss $L_d$ is upper bounded by the excess risk of $L_{dr}$.
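Writing $\mathcal{R}_L(f)$ for the risk of a classifier $f$ under loss $L$ and $\mathcal{R}_L^*$ for its infimum over the hypothesis class, the stated bound can be read as follows (symbols are inferred from the sentence, and any multiplicative constants the full paper may carry are omitted here):

```latex
\mathcal{R}_{L_d}(f) - \mathcal{R}_{L_d}^{*}
\;\le\;
\mathcal{R}_{L_{dr}}(f) - \mathcal{R}_{L_{dr}}^{*}
```

Bounds of this shape justify minimizing the surrogate $L_{dr}$: driving its excess risk to zero drives the excess risk of the target loss $L_d$ to zero as well.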