Search Results for author: Aditya Gangrade

Found 15 papers, 8 papers with code

Counterfactually Comparing Abstaining Classifiers

1 code implementation NeurIPS 2023 Yo Joong Choe, Aditya Gangrade, Aaditya Ramdas

When evaluating black-box abstaining classifiers, however, we lack a principled approach that accounts for what the classifier would have predicted on its abstentions.

Causal Inference counterfactual +1
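The paper treats abstentions as missing outcomes and evaluates the score the classifier *would* have achieved had it predicted everywhere. As a minimal sketch of that idea (not the paper's exact doubly robust estimator), the counterfactual score can be estimated by inverse-propensity weighting, assuming the probability of predicting on each example is known or estimated and bounded away from zero:

```python
import numpy as np

def ipw_counterfactual_score(scores, abstained, propensity):
    """IPW estimate of the mean score the classifier would achieve had it
    never abstained (illustrative sketch; assumes missing-at-random
    abstentions and known prediction propensities).

    scores     : per-example score (e.g. 0/1 correctness); entries on
                 abstained examples are ignored.
    abstained  : boolean mask, True where the classifier abstained.
    propensity : estimated P(predict) per example, bounded away from 0.
    """
    predicted = ~abstained
    # Weight each observed score by 1/P(predict); abstained examples get 0.
    weights = predicted / propensity
    return np.sum(weights * np.where(predicted, scores, 0.0)) / len(scores)
```

With no abstentions and unit propensities this reduces to the ordinary mean score, which is a useful sanity check.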

Scaffolding a Student to Instill Knowledge

1 code implementation International Conference on Learning Representations 2023 Anil Kag, Durmus Alp Emre Acar, Aditya Gangrade, Venkatesh Saligrama

We propose a novel knowledge distillation (KD) method that selectively instills teacher knowledge into a student model, motivated by settings where the student's capacity is significantly smaller than the teacher's.

Knowledge Distillation
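For context, the standard KD objective (Hinton et al.) that this work builds on combines cross-entropy on hard labels with a KL term toward the teacher's temperature-softened distribution. The sketch below shows that baseline objective; the paper's contribution is a per-example gate on the teacher term, which is omitted here:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Vanilla KD loss: alpha * CE(hard labels) + (1-alpha) * T^2 * KL to the
    teacher's softened distribution. Illustrative baseline, not the paper's
    selective scheme."""
    p_t = softmax(teacher_logits, T)
    log_p_s = np.log(softmax(student_logits, T) + 1e-12)
    kd = -np.sum(p_t * log_p_s, axis=-1).mean() * (T * T)  # scaled soft-target term
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * ce + (1 - alpha) * kd
```

The `T * T` factor keeps the gradient magnitudes of the soft and hard terms comparable as the temperature grows.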

Doubly-Optimistic Play for Safe Linear Bandits

no code implementations 27 Sep 2022 Tianrui Chen, Aditya Gangrade, Venkatesh Saligrama

The safe linear bandit problem (SLB) is an online approach to linear programming with unknown objective and unknown round-wise constraints, under stochastic bandit feedback of rewards and safety risks of actions.
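"Doubly optimistic" here refers to being optimistic about both the reward and the feasibility of actions under the unknown constraints. A minimal single-round sketch in that spirit (illustrative, not the paper's exact algorithm) picks, among actions whose lower-confidence risk is within budget, the one with the highest upper-confidence reward:

```python
import numpy as np

def doubly_optimistic_action(actions, theta_hat, mu_hat, Vinv, beta, tau):
    """One round of a doubly-optimistic rule for a safe linear bandit (sketch).

    actions   : (K, d) candidate action features
    theta_hat : (d,) estimate of the reward parameter
    mu_hat    : (d,) estimate of the risk parameter
    Vinv      : (d, d) inverse design matrix (sets confidence widths)
    beta      : confidence radius
    tau       : risk budget
    Returns the index of the chosen action, or None if nothing looks safe.
    """
    # Elliptical confidence width ||a||_{Vinv} per action.
    widths = beta * np.sqrt(np.einsum('kd,de,ke->k', actions, Vinv, actions))
    ucb_reward = actions @ theta_hat + widths   # optimistic reward
    lcb_risk = actions @ mu_hat - widths        # optimistic (low) risk
    feasible = lcb_risk <= tau
    if not feasible.any():
        return None  # no action appears safe even optimistically
    idx = np.where(feasible)[0]
    return int(idx[np.argmax(ucb_reward[idx])])
```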

Strategies for Safe Multi-Armed Bandits with Logarithmic Regret and Risk

no code implementations 1 Apr 2022 Tianrui Chen, Aditya Gangrade, Venkatesh Saligrama

We investigate a natural but surprisingly unstudied approach to the multi-armed bandit problem under safety risk constraints.

Multi-Armed Bandits
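A common template for risk-constrained bandits, sketched below in the spirit of this paper (not its exact strategy): an arm is played for reward only once it is *certified safe*, i.e. the upper confidence bound on its mean risk falls below the budget, with a known-safe arm as fallback:

```python
import numpy as np

def safe_ucb_arm(reward_sum, risk_sum, pulls, t, tau, safe_arm=0):
    """Pick an arm at round t under a risk constraint (illustrative sketch).
    Certify an arm safe when its risk UCB is below the budget tau; among
    certified arms, play the best reward UCB; otherwise fall back to a
    known-safe arm."""
    pulls = np.maximum(pulls, 1)
    rad = np.sqrt(2 * np.log(max(t, 2)) / pulls)  # Hoeffding-style radius
    risk_ucb = risk_sum / pulls + rad
    reward_ucb = reward_sum / pulls + rad
    certified = np.where(risk_ucb <= tau)[0]
    if len(certified) == 0:
        return safe_arm
    return int(certified[np.argmax(reward_ucb[certified])])
```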

Universal Inference Meets Random Projections: A Scalable Test for Log-concavity

2 code implementations 17 Nov 2021 Robin Dunn, Aditya Gangrade, Larry Wasserman, Aaditya Ramdas

Shape constraints yield flexible middle grounds between fully nonparametric and fully parametric approaches to modeling distributions of data.

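The universal-inference recipe used here is the split likelihood-ratio test: fit the alternative on one half of the data, evaluate the likelihood ratio on the other half, and reject when it exceeds 1/alpha; Markov's inequality then gives finite-sample validity with no regularity conditions. Below is a toy instance for a unit-variance Gaussian mean (the paper applies the same recipe to log-concavity via random projections):

```python
import numpy as np

def split_lrt_reject(x, alpha=0.05, seed=0):
    """Universal-inference split LRT for H0: data ~ N(0, 1) against a free
    mean (toy example of the general recipe)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    d0, d1 = x[idx[: len(x) // 2]], x[idx[len(x) // 2:]]
    mu1 = d1.mean()                      # alternative fitted on half D1

    def loglik(data, mu):                # unit-variance Gaussian log-likelihood
        return -0.5 * np.sum((data - mu) ** 2)

    # Likelihood ratio evaluated on the held-out half D0.
    log_ratio = loglik(d0, mu1) - loglik(d0, 0.0)
    return log_ratio >= np.log(1.0 / alpha)
```

Swapping the alternative fit for a log-concave MLE on randomly projected data yields the scalable test the paper proposes.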

Hybrid Cloud-Edge Networks for Efficient Inference

1 code implementation 29 Sep 2021 Anil Kag, Igor Fedorov, Aditya Gangrade, Paul Whatmough, Venkatesh Saligrama

The first network is a low-capacity network that can be deployed on an edge device, whereas the second is a high-capacity network deployed in the cloud.
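The paper learns the edge model, cloud model, and routing decision jointly; a common baseline that the sketch below illustrates instead is a fixed confidence threshold: answer on-device when the edge model is confident, otherwise offload to the cloud:

```python
import numpy as np

def hybrid_predict(x, edge_model, cloud_model, threshold=0.8):
    """Route one input through a low-capacity edge model, falling back to a
    high-capacity cloud model only when the edge model is unsure
    (confidence-threshold baseline, not the paper's learned router).
    Returns (label, used_cloud)."""
    probs = edge_model(x)  # edge model is assumed to return class probabilities
    if probs.max() >= threshold:
        return int(np.argmax(probs)), False      # confident: answer on-device
    return int(np.argmax(cloud_model(x))), True  # uncertain: pay for the cloud call
```

The fraction of inputs with `used_cloud == True` is the communication/compute budget this trades against accuracy.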

Limits on Testing Structural Changes in Ising Models

no code implementations NeurIPS 2020 Aditya Gangrade, Bobak Nazer, Venkatesh Saligrama

We present novel information-theoretic limits on detecting sparse changes in Ising models, a problem that arises in many applications where network changes can occur due to some external stimuli.

Change Detection

Selective Classification via One-Sided Prediction

1 code implementation 15 Oct 2020 Aditya Gangrade, Anil Kag, Venkatesh Saligrama

We propose a novel method for selective classification (SC), a setting in which a classifier may abstain from predicting on some instances, trading off accuracy against coverage (the fraction of instances predicted).

Classification General Classification +1
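The one-sided prediction idea can be sketched schematically: train a separate one-sided acceptor per class, emit a label only when exactly one class accepts, and abstain otherwise (the thresholds and scorers here are placeholders; the paper learns them to meet a target coverage):

```python
import numpy as np

def osp_predict(class_scores, thresholds):
    """Selective classification via one-sided prediction (schematic).
    class_scores : per-class acceptance scores for one example.
    thresholds   : per-class acceptance thresholds.
    Returns the accepted class index, or None to abstain when zero or
    multiple classes accept."""
    accepted = np.where(np.asarray(class_scores) >= np.asarray(thresholds))[0]
    if len(accepted) == 1:
        return int(accepted[0])
    return None  # ambiguous or unsupported region: abstain
```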

Piecewise Linear Regression via a Difference of Convex Functions

2 code implementations ICML 2020 Ali Siahkamari, Aditya Gangrade, Brian Kulis, Venkatesh Saligrama

We present a new piecewise linear regression methodology that fits a difference of convex functions (DC functions) to the data.

regression
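The model class is a difference of two max-affine (convex piecewise-linear) functions, which can represent any continuous piecewise-linear function. The sketch below only evaluates such a model; the paper's contribution is fitting the parameters by convex optimization, which is omitted here:

```python
import numpy as np

def dc_eval(X, A1, b1, A2, b2):
    """Evaluate a difference-of-convex piecewise-linear model
    f(x) = max_i (a1_i . x + b1_i) - max_j (a2_j . x + b2_j).

    X: (n, d) inputs; A1: (k1, d), b1: (k1,); A2: (k2, d), b2: (k2,).
    Returns the (n,) vector of model outputs."""
    return (X @ A1.T + b1).max(axis=1) - (X @ A2.T + b2).max(axis=1)
```

For example, `|x| = max(x, -x) - 0` is recovered with `A1 = [[1], [-1]]`, `b1 = [0, 0]` and a single zero affine piece in the second max.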

Budget Learning via Bracketing

no code implementations 14 Apr 2020 Aditya Gangrade, Durmus Alp Emre Acar, Venkatesh Saligrama

We propose a new formulation of the budget learning (BL) problem via the concept of bracketings.
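A bracketing here is a pair of cheap classifiers that sandwich an expensive predictor; wherever the pair agrees, the cheap answer is certified, and the expensive model is paid for only on the disagreement region. A minimal sketch of that decision rule (names and the 0/1 sandwich assumption are illustrative):

```python
def bracketed_predict(x, h_lo, h_hi, expensive_f):
    """Budget learning with a bracket (schematic). h_lo and h_hi are cheap
    0/1 classifiers assumed to satisfy h_lo(x) <= f(x) <= h_hi(x) for the
    expensive predictor f. Returns (label, used_expensive)."""
    lo, hi = h_lo(x), h_hi(x)
    if lo == hi:
        return lo, False          # bracket agrees: cheap answer is certified
    return expensive_f(x), True   # disagreement region: query the expensive model
```

The inference budget is then the measure of the disagreement region `{x : h_lo(x) != h_hi(x)}`.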

Efficient Near-Optimal Testing of Community Changes in Balanced Stochastic Block Models

no code implementations NeurIPS 2019 Aditya Gangrade, Praveen Venkatesh, Bobak Nazer, Venkatesh Saligrama

Overall, for large changes, $s \gg \sqrt{n}$, we need only $\mathrm{SNR} = O(1)$, whereas a naïve test based on community recovery with $O(s)$ errors requires $\mathrm{SNR} = \Theta(\log n)$.

Two-sample testing

Testing Changes in Communities for the Stochastic Block Model

no code implementations 29 Nov 2018 Aditya Gangrade, Praveen Venkatesh, Bobak Nazer, Venkatesh Saligrama

Overall, for large changes, $s \gg \sqrt{n}$, we need only $\mathrm{SNR} = O(1)$, whereas a naïve test based on community recovery with $O(s)$ errors requires $\mathrm{SNR} = \Theta(\log n)$.

Stochastic Block Model Two-sample testing

Lower Bounds for Two-Sample Structural Change Detection in Ising and Gaussian Models

no code implementations 28 Oct 2017 Aditya Gangrade, Bobak Nazer, Venkatesh Saligrama

We study the trade-off between sample size and the reliability of change detection, measured as a minimax risk, for the important cases of Ising models and Gaussian Markov random fields restricted to network structures with $p$ nodes and degree at most $d$, and obtain information-theoretic lower bounds for reliable change detection over these models.

Change Detection
