1 code implementation • NeurIPS 2023 • Yo Joong Choe, Aditya Gangrade, Aaditya Ramdas
When evaluating black-box abstaining classifiers, however, we lack a principled approach that accounts for what the classifier would have predicted on its abstentions.
1 code implementation • International Conference on Learning Representations 2023 • Anil Kag, Durmus Alp Emre Acar, Aditya Gangrade, Venkatesh Saligrama
We propose a novel knowledge distillation (KD) method that selectively instills teacher knowledge into a student model, motivated by situations where the student's capacity is significantly smaller than the teacher's.
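For context, a minimal sketch of the standard KD objective (Hinton-style temperature-softened targets, not this paper's selective variant): the student is trained on a blend of cross-entropy against the hard label and a KL term against the teacher's softened output. The function names, `T`, and `alpha` here are illustrative choices, not from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, hard_label, T=4.0, alpha=0.5):
    """Blend cross-entropy on the true label with KL(teacher || student)
    computed on temperature-softened distributions."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = np.sum(p_t * (np.log(p_t) - np.log(p_s))) * T * T  # KL term, rescaled by T^2
    hard = -np.log(softmax(student_logits)[hard_label])       # ordinary cross-entropy
    return alpha * soft + (1 - alpha) * hard
```

When the student matches the teacher exactly, the soft term vanishes; the selective KD of this paper would instead gate which examples receive the teacher signal.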
1 code implementation • International Conference on Learning Representations 2023 • Anil Kag, Igor Fedorov, Aditya Gangrade, Paul Whatmough, Venkatesh Saligrama
Training a hybrid learner is difficult since we lack annotations identifying which examples are hard for the edge model.
no code implementations • 27 Sep 2022 • Tianrui Chen, Aditya Gangrade, Venkatesh Saligrama
The safe linear bandit problem (SLB) is an online approach to linear programming with unknown objective and unknown round-wise constraints, under stochastic bandit feedback of rewards and safety risks of actions.
no code implementations • 1 Apr 2022 • Tianrui Chen, Aditya Gangrade, Venkatesh Saligrama
We investigate a natural but surprisingly unstudied approach to the multi-armed bandit problem under safety risk constraints.
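To illustrate the general setting (not this paper's algorithm), here is a toy safe-bandit loop: be pessimistic about safety (play an arm only if the upper confidence bound on its risk is below the threshold) and optimistic about reward among the arms that pass. The initial one-pull-per-arm round and the assumption that arm 0 is known-safe are simplifications for the sketch.

```python
import numpy as np

def safe_bandit(reward_means, risk_means, threshold, horizon, seed=0):
    """Toy loop: an arm is feasible only if the UCB on its safety risk is
    below `threshold`; among feasible arms, play the highest reward UCB.
    Arm 0 is assumed known-safe a priori (a simplifying assumption)."""
    rng = np.random.default_rng(seed)
    k = len(reward_means)
    n = np.zeros(k)       # pulls per arm
    r_sum = np.zeros(k)   # observed reward totals
    c_sum = np.zeros(k)   # observed risk totals
    for t in range(horizon):
        if t < k:
            arm = t  # one exploratory pull per arm (a real safe algorithm avoids this)
        else:
            width = np.sqrt(2 * np.log(t) / n)
            risk_ucb = c_sum / n + width
            risk_ucb[0] = 0.0  # arm 0 treated as certifiably safe
            reward_ucb = r_sum / n + width
            scores = np.where(risk_ucb <= threshold, reward_ucb, -np.inf)
            arm = int(np.argmax(scores))
        n[arm] += 1
        r_sum[arm] += rng.binomial(1, reward_means[arm])
        c_sum[arm] += rng.binomial(1, risk_means[arm])
    return n
```

In this toy, a high-reward but risky arm is quickly frozen out because its risk UCB never drops below the threshold.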
2 code implementations • 17 Nov 2021 • Robin Dunn, Aditya Gangrade, Larry Wasserman, Aaditya Ramdas
Shape constraints yield flexible middle grounds between fully nonparametric and fully parametric approaches to modeling distributions of data.
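A canonical example of a shape-constrained estimator (isotonic regression, a different constraint than the densities studied here, but the same spirit) is the pool-adjacent-violators algorithm: fit the least-squares non-decreasing sequence with no parametric form assumed.

```python
def pava(y):
    """Pool Adjacent Violators: least-squares fit of a non-decreasing
    sequence to y. Maintains blocks of (sum, count); merges a block into
    its predecessor whenever the block means violate monotonicity."""
    blocks = []  # each entry: [sum, count]
    for v in y:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)  # expand each block to its mean
    return out
```

No bandwidth or model order is chosen; the monotonicity constraint alone regularizes the fit, which is the "middle ground" the snippet refers to.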
1 code implementation • NeurIPS 2021 • Aditya Gangrade, Anil Kag, Ashok Cutkosky, Venkatesh Saligrama
For example, this may model an adaptive decision to invoke more resources on this instance.
1 code implementation • 29 Sep 2021 • Anil Kag, Igor Fedorov, Aditya Gangrade, Paul Whatmough, Venkatesh Saligrama
The first network is a low-capacity network that can be deployed on an edge device, whereas the second is a high-capacity network deployed in the cloud.
no code implementations • NeurIPS 2020 • Aditya Gangrade, Bobak Nazer, Venkatesh Saligrama
We present novel information-theoretic limits on detecting sparse changes in Ising models, a problem that arises in many applications where network changes can occur due to some external stimuli.
1 code implementation • 15 Oct 2020 • Aditya Gangrade, Anil Kag, Venkatesh Saligrama
We propose a novel method for selective classification (SC), a problem which allows a classifier to abstain from predicting some instances, thus trading off accuracy against coverage (the fraction of instances predicted).
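The standard baseline for this accuracy/coverage trade-off (not the method proposed here, which trains the selection mechanism) is confidence thresholding: abstain whenever the top-class probability falls below a cutoff. A minimal sketch:

```python
import numpy as np

def selective_predict(probs, threshold):
    """Abstain (-1) when the top class probability is below `threshold`.
    Returns the selective predictions and the coverage achieved."""
    probs = np.asarray(probs, dtype=float)
    conf = probs.max(axis=1)                      # top-class confidence
    preds = probs.argmax(axis=1)
    preds = np.where(conf >= threshold, preds, -1)  # -1 marks abstention
    coverage = float((preds != -1).mean())
    return preds, coverage
```

Raising the threshold lowers coverage but typically raises accuracy on the instances that are still predicted, tracing out the trade-off curve the snippet describes.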
2 code implementations • ICML 2020 • Ali Siahkamari, Aditya Gangrade, Brian Kulis, Venkatesh Saligrama
We present a new piecewise linear regression methodology based on fitting a difference of convex (DC) functions to the data.
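To see why the DC representation is expressive, note that a max of affine functions is convex piecewise-linear, and a difference of two such maxes can also represent non-convex shapes. A small evaluation-only sketch (the fitting procedure itself is the paper's contribution and is not reproduced here):

```python
import numpy as np

def max_affine(x, W, b):
    """Convex piecewise-linear function: max_k (w_k . x + b_k)."""
    x = np.atleast_2d(x)
    return (x @ W.T + b).max(axis=1)

def dc_function(x, W1, b1, W2, b2):
    """Difference of two max-affine functions; this class can represent
    continuous piecewise-linear functions that are not convex."""
    return max_affine(x, W1, b1) - max_affine(x, W2, b2)
```

For instance, choosing the first term as the constant 0 and the second as max(x, -x) = |x| yields the concave hat -|x|, which no single max-affine (convex) model can produce.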
no code implementations • 14 Apr 2020 • Aditya Gangrade, Durmus Alp Emre Acar, Venkatesh Saligrama
We propose a new formulation for the BL problem via the concept of bracketings.
no code implementations • NeurIPS 2019 • Aditya Gangrade, Praveen Venkatesh, Bobak Nazer, Venkatesh Saligrama
Overall, for large changes, $s \gg \sqrt{n}$, we need only $\mathrm{SNR} = O(1)$, whereas a naïve test based on community recovery with $O(s)$ errors requires $\mathrm{SNR} = \Theta(\log n)$.
no code implementations • 29 Nov 2018 • Aditya Gangrade, Praveen Venkatesh, Bobak Nazer, Venkatesh Saligrama
Overall, for large changes, $s \gg \sqrt{n}$, we need only $\mathrm{SNR} = O(1)$, whereas a naïve test based on community recovery with $O(s)$ errors requires $\mathrm{SNR} = \Theta(\log n)$.
no code implementations • 28 Oct 2017 • Aditya Gangrade, Bobak Nazer, Venkatesh Saligrama
We study the trade-off between sample size and the reliability of change detection, measured as a minimax risk, for the important cases of Ising models and Gaussian Markov random fields whose network structures have $p$ nodes and degree at most $d$, and we obtain information-theoretic lower bounds for reliable change detection over these models.