Search Results for author: Kulin Shah

Found 9 papers, 3 papers with code

Simple Mechanisms for Representing, Indexing and Manipulating Concepts

no code implementations · 18 Oct 2023 · Yuanzhi Li, Raghu Meka, Rina Panigrahy, Kulin Shah

Deep networks typically learn concepts via classifiers, which involves setting up a model and training it via gradient descent to fit the concept-labeled data.
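The baseline the abstract contrasts against, a classifier fit to concept-labeled data by gradient descent, can be sketched as follows (a minimal logistic-regression example on synthetic labels; the data and all names are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical concept-labeled data: label 1 iff x0 + x1 > 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid scores
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient of the logistic loss
    b -= lr * np.mean(p - y)

acc = np.mean((X @ w + b > 0) == (y == 1))  # training accuracy on the concept
```

The learned linear threshold recovers the concept almost exactly on this separable toy data.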

Ambient Diffusion: Learning Clean Distributions from Corrupted Data

1 code implementation · NeurIPS 2023 · Giannis Daras, Kulin Shah, Yuval Dagan, Aravind Gollakota, Alexandros G. Dimakis, Adam Klivans

We present the first diffusion-based framework that can learn an unknown distribution using only highly-corrupted samples.

RISAN: Robust Instance Specific Abstention Network

1 code implementation · 7 Jul 2021 · Bhavya Kalra, Kulin Shah, Naresh Manwani

In this paper, we propose deep architectures for learning instance-specific abstain (reject option) binary classifiers.

Task: Active Learning
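The simplest form of a reject option classifier is a fixed-width abstention band around the decision boundary; this is only a generic sketch, not the instance-specific rejection that RISAN learns:

```python
import numpy as np

def predict_with_reject(scores, width):
    """Binary prediction with a reject option: abstain (output 0) when the
    score falls inside the band (-width, width); otherwise predict the sign."""
    out = np.sign(scores)
    out[np.abs(scores) < width] = 0.0  # 0 denotes abstention
    return out

scores = np.array([-2.0, -0.1, 0.05, 1.5])
print(predict_with_reject(scores, width=0.5))  # [-1.  0.  0.  1.]
```

An instance-specific method replaces the single `width` with a learned rejection function of the input.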

Learning and Generalization in Overparameterized Normalizing Flows

1 code implementation · 19 Jun 2021 · Kulin Shah, Amit Deshpande, Navin Goyal

In supervised learning, it is known that overparameterized neural networks with one hidden layer provably and efficiently learn and generalize, when trained using stochastic gradient descent with a sufficiently small learning rate and suitable initialization.

Task: Density Estimation
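The supervised-learning result the abstract refers to, an overparameterized one-hidden-layer network trained by SGD with a small learning rate and Gaussian initialization, can be sketched on a toy regression task (all sizes and constants here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, width = 50, 512                           # overparameterized: width >> n
x = rng.uniform(-2, 2, size=n)
y = np.sin(x)
X = np.stack([x, np.ones(n)], axis=1)        # (n, 2): input plus a bias feature

W = rng.normal(size=(width, 2))              # hidden weights, Gaussian init
a = rng.normal(size=width) / np.sqrt(width)  # output weights, scaled init

def forward(xi):
    h = np.maximum(W @ xi, 0.0)              # ReLU hidden activations
    return h, a @ h

lr = 5e-4                                    # sufficiently small learning rate
for _ in range(4000):
    i = rng.integers(n)                      # SGD: one random sample per step
    h, pred = forward(X[i])
    err = pred - y[i]
    grad_a = err * h                         # gradients of squared loss
    grad_W = err * np.outer(a * (h > 0), X[i])
    a -= lr * grad_a
    W -= lr * grad_W

mse = np.mean([(forward(X[i])[1] - y[i]) ** 2 for i in range(n)])
```

In this regime the network nearly interpolates the training data, matching the known supervised-learning behavior the paper extends to normalizing flows.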

Rawlsian Fair Adaptation of Deep Learning Classifiers

no code implementations · 31 May 2021 · Kulin Shah, Pooja Gupta, Amit Deshpande, Chiranjib Bhattacharyya

Given any score function or feature representation and only its second-order statistics on the sensitive sub-populations, we seek a threshold classifier on the given score or a linear threshold classifier on the given feature representation that achieves the Rawls error rate restricted to this hypothesis class.

Task: Fairness
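A minimax (Rawlsian) threshold on a given score can be sketched with a grid search that minimizes the worst group's error rate; this toy version assumes Gaussian score distributions recovered from the second-order statistics, and the numbers are hypothetical, not the paper's algorithm:

```python
import math

def phi(z):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical per-group statistics of the score:
# (mean_neg, std_neg, mean_pos, std_pos, prior_pos)
stats = {
    "group_a": (-1.0, 1.0, 1.0, 1.0, 0.5),
    "group_b": (-0.5, 1.2, 0.8, 1.0, 0.4),
}

def group_error(t, m0, s0, m1, s1, p1):
    # false positives among negatives + false negatives among positives
    return (1 - p1) * (1 - phi((t - m0) / s0)) + p1 * phi((t - m1) / s1)

# Rawlsian choice: the threshold minimizing the worst group's error rate.
best_t = min(
    (t / 100.0 for t in range(-300, 301)),
    key=lambda t: max(group_error(t, *s) for s in stats.values()),
)
```

Restricting to threshold (or linear threshold) classifiers is what makes this second-order information sufficient.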

Learning and Generalization in Univariate Overparameterized Normalizing Flows

no code implementations · 1 Jan 2021 · Kulin Shah, Amit Deshpande, Navin Goyal

In supervised learning, it is known that overparameterized neural networks with one hidden layer provably and efficiently learn and generalize, when trained using Stochastic Gradient Descent (SGD).

Task: Density Estimation

Online Active Learning of Reject Option Classifiers

no code implementations · 14 Jun 2019 · Kulin Shah, Naresh Manwani

In this paper, we propose novel algorithms for active learning of reject option classifiers.

Tasks: Active Learning, Binary Classification, +1 more

PLUME: Polyhedral Learning Using Mixture of Experts

no code implementations · 22 Apr 2019 · Kulin Shah, P. S. Sastry, Naresh Manwani

In this paper, we propose a novel mixture-of-experts architecture for learning polyhedral classifiers.

Task: Generalization Bounds
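A polyhedral classifier labels a point positive exactly when every linear "expert" (halfspace) does; a minimal sketch with fixed, illustrative weights rather than the mixture learned by PLUME:

```python
import numpy as np

# Two halfspaces: x0 > 0 and x1 > 0; positive region is their intersection.
W = np.array([[1.0, 0.0], [0.0, 1.0]])  # one row per linear expert
b = np.array([0.0, 0.0])

def polyhedral_predict(X):
    margins = X @ W.T + b                    # (n, K) expert margins
    return (margins.min(axis=1) > 0).astype(int)  # positive iff all agree

X = np.array([[1.0, 1.0], [1.0, -1.0], [-2.0, 3.0]])
print(polyhedral_predict(X))  # [1 0 0]
```

Learning amounts to fitting the rows of `W` and `b` jointly, which is where the mixture-of-experts formulation comes in.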

Sparse Reject Option Classifier Using Successive Linear Programming

no code implementations · 12 Feb 2018 · Kulin Shah, Naresh Manwani

We also show that the excess risk of loss $L_d$ is upper bounded by the excess risk of $L_{dr}$.
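The stated relationship has the usual excess-risk calibration form; writing $R_L(f)$ for the risk of a classifier $f$ under loss $L$ and $R_L^*$ for the corresponding Bayes risk, the claim can be sketched as:

```latex
R_{L_d}(f) - R^{*}_{L_d} \;\le\; R_{L_{dr}}(f) - R^{*}_{L_{dr}}
```

Here $L_{dr}$ denotes the surrogate loss that is minimized and $L_d$ the target reject-option loss; the notation is illustrative, not copied from the paper.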
