Search Results for author: KrishnaTeja Killamsetty

Found 12 papers, 7 papers with code

Beyond Active Learning: Leveraging the Full Potential of Human Interaction via Auto-Labeling, Human Correction, and Human Verification

no code implementations 2 Jun 2023 Nathan Beck, KrishnaTeja Killamsetty, Suraj Kothawade, Rishabh Iyer

Active Learning (AL) is a human-in-the-loop framework to interactively and adaptively label data instances, thereby enabling significant gains in model performance compared to random sampling.

Active Learning
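The human-in-the-loop labeling loop described in the snippet above can be sketched in a few lines. This is a generic uncertainty-sampling illustration, not the method of the paper (which goes beyond standard AL): a nearest-centroid classifier queries the pool point it is least sure about, an oracle reveals the label, and the model is refit. All names and the toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary data: two Gaussian blobs, labels known to the "human" oracle.
X = np.vstack([rng.normal(-2, 1, size=(300, 2)), rng.normal(2, 1, size=(300, 2))])
y = np.array([0] * 300 + [1] * 300)

labeled = [0, 5, 300, 305]  # tiny seed set, two examples per class

def centroids(idx):
    """Class centroids computed from the currently labeled indices."""
    return {c: X[[i for i in idx if y[i] == c]].mean(axis=0) for c in (0, 1)}

def margin(x, cent):
    """Gap between distances to the two centroids; small gap = uncertain."""
    return abs(np.linalg.norm(x - cent[0]) - np.linalg.norm(x - cent[1]))

# Active-learning loop: query the most uncertain unlabeled point,
# let the oracle reveal its label, then refit the centroids.
for _ in range(20):
    cent = centroids(labeled)
    pool = [i for i in range(len(X)) if i not in labeled]
    query = min(pool, key=lambda i: margin(X[i], cent))
    labeled.append(query)  # oracle provides y[query]

cent = centroids(labeled)
preds = np.array(
    [0 if np.linalg.norm(x - cent[0]) < np.linalg.norm(x - cent[1]) else 1 for x in X]
)
accuracy = (preds == y).mean()
```

With only 24 labels out of 600 points, the adaptively chosen queries are enough to separate the two blobs, which is the gain over random sampling that the snippet refers to.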

INGENIOUS: Using Informative Data Subsets for Efficient Pre-Training of Language Models

no code implementations 11 May 2023 H S V N S Kowndinya Renduchintala, KrishnaTeja Killamsetty, Sumit Bhatia, Milan Aggarwal, Ganesh Ramakrishnan, Rishabh Iyer, Balaji Krishnamurthy

A salient characteristic of pre-trained language models (PTLMs) is a remarkable improvement in their generalization capability and the emergence of new capabilities with increasing model capacity and pre-training dataset size.

AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient Hyper-parameter Tuning

1 code implementation 15 Mar 2022 KrishnaTeja Killamsetty, Guttu Sai Abhishek, Aakriti, Alexandre V. Evfimievski, Lucian Popa, Ganesh Ramakrishnan, Rishabh Iyer

Our central insight is that using an informative subset of the dataset for the model training runs involved in hyper-parameter optimization allows us to find the optimal hyper-parameter configuration significantly faster.
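The idea in the snippet above can be illustrated with a toy sketch: evaluate each hyper-parameter candidate on a small subset of the data rather than the full set. Note the hedge in the comments: the subset here is uniform-random for simplicity, whereas the paper selects informative subsets via gradient-based criteria; the data and learning-rate grid are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 3*x + noise, 10,000 points.
X = rng.normal(size=(10_000, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=10_000)

def subset_mse(lr, Xtr, ytr, steps=200):
    """Full-batch gradient descent on a 1-D linear model; return final MSE."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * np.mean((Xtr[:, 0] * w - ytr) * Xtr[:, 0])
        w -= lr * grad
    return np.mean((Xtr[:, 0] * w - ytr) ** 2)

# Tune the learning rate on 500 points instead of all 10,000.
# (Hypothetical simplification: uniform-random subset, not the paper's
# gradient-based informative subset.)
idx = rng.choice(len(X), size=500, replace=False)
grid = [0.001, 0.01, 0.1, 0.5]
best_lr = min(grid, key=lambda lr: subset_mse(lr, X[idx], y[idx]))
```

Each tuning run touches 5% of the data, yet the subset still rules out the clearly under-performing learning rates, which is the speed-up the snippet describes.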

GCR: Gradient Coreset Based Replay Buffer Selection For Continual Learning

no code implementations CVPR 2022 Rishabh Tiwari, KrishnaTeja Killamsetty, Rishabh Iyer, Pradeep Shenoy

To address this, replay-based CL approaches maintain and repeatedly retrain on a small buffer of data selected across encountered tasks.

Continual Learning

RETRIEVE: Coreset Selection for Efficient and Robust Semi-Supervised Learning

1 code implementation NeurIPS 2021 KrishnaTeja Killamsetty, Xujiang Zhao, Feng Chen, Rishabh Iyer

In this work, we propose RETRIEVE, a coreset selection framework for efficient and robust semi-supervised learning.

GRAD-MATCH: Gradient Matching based Data Subset Selection for Efficient Deep Model Training

3 code implementations 27 Feb 2021 KrishnaTeja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, Abir De, Rishabh Iyer

We provide rigorous theoretical convergence guarantees for the proposed algorithm and, through extensive experiments on real-world datasets, demonstrate the effectiveness of our proposed framework.
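The core idea behind gradient matching can be sketched with a simplified greedy selector: pick a subset whose mean gradient stays close to the full-data mean gradient. This is a hedged toy illustration with random vectors standing in for per-example gradients; the paper's actual algorithm uses orthogonal matching pursuit with learned per-element weights, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-example gradients: n examples, d parameters.
n, d, k = 200, 16, 10
G_per = rng.normal(size=(n, d))
G_full = G_per.mean(axis=0)  # full-data mean gradient

# Simplified greedy matching: repeatedly add the example whose gradient
# best shrinks the gap between the subset's mean gradient and G_full.
chosen = []
for _ in range(k):
    best_i, best_err = None, np.inf
    for i in range(n):
        if i in chosen:
            continue
        cand = chosen + [i]
        err = np.linalg.norm(G_per[cand].mean(axis=0) - G_full)
        if err < best_err:
            best_i, best_err = i, err
    chosen.append(best_i)

subset_err = np.linalg.norm(G_per[chosen].mean(axis=0) - G_full)
rand_idx = rng.choice(n, size=k, replace=False)
random_err = np.linalg.norm(G_per[rand_idx].mean(axis=0) - G_full)
```

Training on the greedily matched subset then approximates the full-data gradient direction far better than a same-sized random subset would, which is what makes subset training efficient without sacrificing convergence.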

GLISTER: Generalization based Data Subset Selection for Efficient and Robust Learning

1 code implementation 19 Dec 2020 KrishnaTeja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, Rishabh Iyer

Finally, we propose Glister-Active, an extension to batch active learning, and we empirically demonstrate the performance of Glister on a wide range of tasks including: (a) data selection to reduce training time, (b) robust learning under label noise and imbalance settings, and (c) batch active learning with several deep and shallow models.

Active Learning

A Nested Bi-level Optimization Framework for Robust Few Shot Learning

no code implementations 13 Nov 2020 KrishnaTeja Killamsetty, Changbin Li, Chen Zhao, Rishabh Iyer, Feng Chen

Model-Agnostic Meta-Learning (MAML), a popular gradient-based meta-learning framework, assumes that the contribution of each task or instance to the meta-learner is equal.

Few-Shot Learning

Semi-Supervised Data Programming with Subset Selection

1 code implementation Findings (ACL) 2021 Ayush Maheshwari, Oishik Chatterjee, KrishnaTeja Killamsetty, Ganesh Ramakrishnan, Rishabh Iyer

The first contribution of this work is a semi-supervised data programming framework that learns a joint model which effectively uses the rules/labelling functions along with semi-supervised loss functions on the feature space.

Text Classification
