Search Results for author: Durga Sivasubramanian

Found 8 papers, 5 papers with code

Gradient Coreset for Federated Learning

1 code implementation • 13 Jan 2024 • Durga Sivasubramanian, Lokesh Nagalapatti, Rishabh Iyer, Ganesh Ramakrishnan

We conduct experiments on four real-world datasets and show that GCFL (1) is more compute- and energy-efficient than standard FL, (2) is robust to various kinds of noise in both the feature space and the labels, (3) preserves the privacy of the validation dataset, and (4) introduces a small communication overhead while achieving significant performance gains, particularly when the clients' data is noisy.

Federated Learning
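To make the setup concrete, here is a minimal sketch of coreset-based federated averaging, assuming a least-squares objective; the function names and the random stand-in for coreset selection are illustrative, not GCFL's actual algorithm.

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    # One SGD step of least-squares regression on the client's coreset.
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(0)
w = np.zeros(5)
clients = [(rng.normal(size=(40, 5)), rng.normal(size=40)) for _ in range(3)]

for round_ in range(10):
    updates = []
    for X, y in clients:
        # Random pick is a stand-in for GCFL's gradient-based coreset;
        # each client trains on a small subset instead of its full data.
        idx = rng.choice(len(y), size=8, replace=False)
        updates.append(local_step(w, X[idx], y[idx]))
    w = np.mean(updates, axis=0)  # server averages the client models
```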

Using Early Readouts to Mediate Featural Bias in Distillation

no code implementations • 28 Oct 2023 • Rishabh Tiwari, Durga Sivasubramanian, Anmol Mekala, Ganesh Ramakrishnan, Pradeep Shenoy

Deep networks tend to learn spurious feature-label correlations in real-world supervised learning tasks.

Fairness

Partitioned Gradient Matching-based Data Subset Selection for Compute-Efficient Robust ASR Training

no code implementations • 30 Oct 2022 • Ashish Mittal, Durga Sivasubramanian, Rishabh Iyer, Preethi Jyothi, Ganesh Ramakrishnan

Training state-of-the-art ASR systems such as RNN-T often incurs high financial and environmental costs.

Adaptive Mixing of Auxiliary Losses in Supervised Learning

1 code implementation • 7 Feb 2022 • Durga Sivasubramanian, Ayush Maheshwari, Pradeep Shenoy, Prathosh AP, Ganesh Ramakrishnan

In several supervised learning scenarios, auxiliary losses are used to introduce additional information or constraints into the supervised learning objective.

Denoising, Knowledge Distillation +1
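As a rough illustration of loss mixing, below is a PyTorch sketch where two auxiliary losses are combined with the primary loss through learnable weights; the paper adapts the mixing via a bilevel, validation-driven objective, so treat this as a simplified stand-in with names of my choosing.

```python
import torch

# Hypothetical setup: one primary loss plus two auxiliary losses
# (e.g., a distillation loss and a denoising loss), mixed with
# learnable weights rather than the paper's meta-learned schedule.
mix_logits = torch.nn.Parameter(torch.zeros(2))

def total_loss(primary, aux_losses):
    weights = torch.sigmoid(mix_logits)  # keep each weight in (0, 1)
    return primary + sum(w * l for w, l in zip(weights, aux_losses))

primary = torch.tensor(1.2)
aux = [torch.tensor(0.7), torch.tensor(0.3)]
loss = total_loss(primary, aux)
loss.backward()                 # gradients reach the mixing weights too
print(loss.item(), mix_logits.grad)
```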

Training Data Subset Selection for Regression with Controlled Generalization Error

1 code implementation • 23 Jun 2021 • Durga Sivasubramanian, Rishabh Iyer, Ganesh Ramakrishnan, Abir De

First, we represent this problem with simplified constraints using the dual of the original training problem and show that the objective of this new representation is a monotone and alpha-submodular function, for a wide variety of modeling choices.

Regression
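For readers unfamiliar with the terminology in the excerpt, one common definition of monotone, alpha-submodular set functions is sketched below (notation mine, not the paper's exact statement):

```latex
% A set function f : 2^V -> R_{>=0} is monotone and alpha-submodular,
% for some alpha in (0, 1], if
\[
  f(A) \le f(B) \quad \text{whenever } A \subseteq B \subseteq V,
\]
\[
  f(A \cup \{e\}) - f(A) \;\ge\; \alpha \bigl( f(B \cup \{e\}) - f(B) \bigr)
  \quad \text{for all } A \subseteq B \subseteq V,\ e \in V \setminus B.
\]
% alpha = 1 recovers ordinary submodularity, and greedy subset selection
% retains an approximation guarantee that degrades gracefully as alpha
% shrinks.
```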

Effective Evaluation of Deep Active Learning on Image Classification Tasks

no code implementations • 16 Jun 2021 • Nathan Beck, Durga Sivasubramanian, Apurva Dani, Ganesh Ramakrishnan, Rishabh Iyer

Issues in the current literature include sometimes contradictory observations on the performance of different AL algorithms, the unintended exclusion of important generalization approaches such as data augmentation and SGD-based optimization, a lack of study of evaluation facets such as the labeling efficiency of AL, and little or no clarity on the scenarios in which AL outperforms random sampling (RS).

Active Learning, Benchmarking +3
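To ground the AL-versus-RS comparison the excerpt refers to, here is a toy batch active learning loop in scikit-learn contrasting uncertainty sampling with random sampling; it sketches only the evaluation pattern and is far simpler than the paper's deep-learning benchmark.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
pool = np.arange(1500)                       # unlabeled pool indices
test_X, test_y = X[1500:], y[1500:]

def run(strategy, rounds=5, batch=50, seed=0):
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(pool, size=batch, replace=False))
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X[labeled], y[labeled])
        rest = np.setdiff1d(pool, labeled)
        if strategy == "uncertainty":
            # Query the points the model is least confident about.
            conf = clf.predict_proba(X[rest]).max(axis=1)
            picks = rest[np.argsort(conf)[:batch]]
        else:                                # random sampling baseline
            picks = rng.choice(rest, size=batch, replace=False)
        labeled.extend(picks)
    return clf.fit(X[labeled], y[labeled]).score(test_X, test_y)

print("AL:", run("uncertainty"))
print("RS:", run("random"))
```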

GRAD-MATCH: Gradient Matching based Data Subset Selection for Efficient Deep Model Training

3 code implementations • 27 Feb 2021 • KrishnaTeja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, Abir De, Rishabh Iyer

We provide rigorous theoretical convergence guarantees for the proposed algorithm and, through extensive experiments on real-world datasets, demonstrate the effectiveness of our framework.
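The core idea, selecting a weighted subset whose gradient sum tracks the full training gradient, can be sketched as a greedy matching loop. The official implementation uses an orthogonal matching pursuit solver with nonnegative weights; the version below is a simplified stand-in with illustrative names.

```python
import numpy as np

def grad_match(G, k, lam=1e-3):
    """Greedily choose k example gradients (rows of G) and weights so
    their weighted sum approximates the full gradient. Simplified:
    GRAD-MATCH additionally constrains the weights to be nonnegative."""
    target = G.sum(axis=0)
    chosen, residual = [], target.copy()
    for _ in range(k):
        scores = G @ residual                # alignment with the residual
        scores[chosen] = -np.inf             # never pick an example twice
        chosen.append(int(np.argmax(scores)))
        S = G[chosen]
        A = S @ S.T + lam * np.eye(len(chosen))
        w = np.linalg.solve(A, S @ target)   # refit weights (regularized LS)
        residual = target - w @ S
    return chosen, w

G = np.random.randn(200, 30)                 # 200 per-example gradients
subset, weights = grad_match(G, k=20)
```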

GLISTER: Generalization based Data Subset Selection for Efficient and Robust Learning

1 code implementation • 19 Dec 2020 • KrishnaTeja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, Rishabh Iyer

Finally, we propose Glister-Active, an extension to batch active learning, and we empirically demonstrate the performance of Glister on a wide range of tasks, including (a) data selection to reduce training time, (b) robust learning under label noise and class imbalance, and (c) batch active learning with several deep and shallow models.

Active Learning
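Glister's selection objective, roughly choosing training points whose updates most improve held-out log-likelihood, can be approximated cheaply by scoring each point's gradient alignment with a validation gradient. The sketch below uses logistic regression to stay self-contained; it is my simplification, not the paper's exact bilevel algorithm.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def glister_scores(w, Xtr, ytr, Xval, yval):
    # Validation-set gradient of the log-likelihood at the current w.
    g_val = Xval.T @ (yval - sigmoid(Xval @ w)) / len(yval)
    # Per-example training gradients, scored by alignment with g_val.
    g_tr = Xtr * (ytr - sigmoid(Xtr @ w))[:, None]
    return g_tr @ g_val

rng = np.random.default_rng(1)
w = np.zeros(10)
Xtr, ytr = rng.normal(size=(500, 10)), rng.integers(0, 2, 500)
Xval, yval = rng.normal(size=(100, 10)), rng.integers(0, 2, 100)

# Keep the 50 training points that most improve validation likelihood.
subset = np.argsort(-glister_scores(w, Xtr, ytr, Xval, yval))[:50]
```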
