no code implementations • 2 Jun 2023 • Nathan Beck, KrishnaTeja Killamsetty, Suraj Kothawade, Rishabh Iyer
Active Learning (AL) is a human-in-the-loop framework to interactively and adaptively label data instances, thereby enabling significant gains in model performance compared to random sampling.
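To make the AL setting concrete, below is a minimal sketch of a generic pool-based active-learning loop with uncertainty sampling. It illustrates the interactive labeling framework only, not this paper's selection strategy; the classifier, pool, and function names are illustrative assumptions.

```python
# Generic pool-based active learning with uncertainty sampling (a sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(X_pool, y_oracle, n_init=20, budget=100, batch=10):
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X_pool), n_init, replace=False))
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    model = LogisticRegression(max_iter=1000)
    while len(labeled) < n_init + budget:
        model.fit(X_pool[labeled], y_oracle[labeled])
        # Uncertainty = low maximum class probability on the unlabeled pool.
        probs = model.predict_proba(X_pool[unlabeled])
        uncertainty = 1.0 - probs.max(axis=1)
        picks = np.argsort(-uncertainty)[:batch]
        for p in sorted(picks, reverse=True):
            # "Query the oracle": in practice a human labels these instances.
            labeled.append(unlabeled.pop(p))
    model.fit(X_pool[labeled], y_oracle[labeled])
    return model, labeled
```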
no code implementations • 11 May 2023 • H S V N S Kowndinya Renduchintala, KrishnaTeja Killamsetty, Sumit Bhatia, Milan Aggarwal, Ganesh Ramakrishnan, Rishabh Iyer, Balaji Krishnamurthy
A salient characteristic of pre-trained language models (PTLMs) is the remarkable improvement in their generalization ability and the emergence of new capabilities as model capacity and pre-training dataset size grow.
no code implementations • 30 Jan 2023 • KrishnaTeja Killamsetty, Alexandre V. Evfimievski, Tejaswini Pedapati, Kiran Kate, Lucian Popa, Rishabh Iyer
Training deep networks and tuning hyperparameters on large datasets is computationally intensive.
1 code implementation • 15 Mar 2022 • KrishnaTeja Killamsetty, Guttu Sai Abhishek, Aakriti, Alexandre V. Evfimievski, Lucian Popa, Ganesh Ramakrishnan, Rishabh Iyer
Our central insight is that using an informative subset of the dataset for the model training runs involved in hyper-parameter optimization allows us to find the optimal hyper-parameter configuration significantly faster.
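A minimal sketch of that insight: score each hyper-parameter configuration by training on a small subset rather than the full data, then retrain the winner on everything. The paper selects an *informative* subset via gradient-based methods; a random subset is used here purely to keep the sketch self-contained, and all names are illustrative.

```python
# Hyper-parameter search on a small data subset (a sketch).
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

def tune_on_subset(X, y, configs, subset_frac=0.1, seed=0):
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=seed)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_tr), int(subset_frac * len(X_tr)), replace=False)
    best_cfg, best_score = None, -np.inf
    for cfg in configs:  # e.g. [{"alpha": 1e-4}, {"alpha": 1e-2}, ...]
        model = SGDClassifier(**cfg, random_state=seed).fit(X_tr[idx], y_tr[idx])
        score = model.score(X_val, y_val)  # cheap proxy for full-data quality
        if score > best_score:
            best_cfg, best_score = cfg, score
    # Retrain once on the full training data with the winning configuration.
    return SGDClassifier(**best_cfg, random_state=seed).fit(X_tr, y_tr)
```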
no code implementations • CVPR 2022 • Rishabh Tiwari, KrishnaTeja Killamsetty, Rishabh Iyer, Pradeep Shenoy
To address catastrophic forgetting, replay-based continual learning (CL) approaches maintain, and repeatedly retrain on, a small buffer of data selected across encountered tasks.
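For illustration, here is a minimal replay buffer filled by reservoir sampling, whose contents would be mixed into each new task's minibatches. The paper's contribution is *how* the buffer is selected; reservoir sampling is shown only as the simplest baseline, and the class is a hypothetical sketch.

```python
# Fixed-size experience-replay buffer via reservoir sampling (a sketch).
import random

class ReplayBuffer:
    def __init__(self, capacity):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Reservoir sampling: keep each new example w.p. capacity/seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        # Replayed examples are concatenated with the current task's batch.
        return random.sample(self.data, min(k, len(self.data)))
```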
1 code implementation • Findings (ACL) 2022 • Ayush Maheshwari, KrishnaTeja Killamsetty, Ganesh Ramakrishnan, Rishabh Iyer, Marina Danilevsky, Lucian Popa
These labeling functions (LFs), in turn, have been used to generate a large amount of additional noisy labeled data, in a paradigm now commonly referred to as data programming.
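To ground the terminology, below is a toy sketch of LFs and simple majority-vote aggregation for a spam task. Real data-programming systems learn a generative label model rather than majority voting, and the LFs and constants here are invented for illustration.

```python
# Toy labeling functions with majority-vote aggregation (a sketch).
ABSTAIN, HAM, SPAM = -1, 0, 1

def lf_keyword(text):      # fires on an obvious spam phrase
    return SPAM if "free money" in text.lower() else ABSTAIN

def lf_shouting(text):     # all-caps messages are often spam
    return SPAM if text.isupper() else ABSTAIN

def lf_short_reply(text):  # very short messages are usually ham
    return HAM if len(text.split()) < 4 else ABSTAIN

def weak_label(text, lfs=(lf_keyword, lf_shouting, lf_short_reply)):
    votes = [lf(text) for lf in lfs if lf(text) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)  # majority over non-abstains
```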
1 code implementation • NeurIPS 2021 • Suraj Kothawade, Nathan Beck, KrishnaTeja Killamsetty, Rishabh Iyer
Active learning has proven to be useful for minimizing labeling costs by selecting the most informative samples.
1 code implementation • NeurIPS 2021 • KrishnaTeja Killamsetty, Xujiang Zhao, Feng Chen, Rishabh Iyer
In this work, we propose RETRIEVE, a coreset selection framework for efficient and robust semi-supervised learning.
3 code implementations • 27 Feb 2021 • KrishnaTeja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, Abir De, Rishabh Iyer
We provide rigorous theoretical and convergence guarantees for the proposed algorithm and, through extensive experiments on real-world datasets, demonstrate the effectiveness of our framework.
1 code implementation • 19 Dec 2020 • KrishnaTeja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, Rishabh Iyer
Finally, we propose Glister-Active, an extension to batch active learning, and we empirically demonstrate the performance of Glister on a wide range of tasks, including (a) data selection to reduce training time, (b) robust learning under label noise and class imbalance, and (c) batch active learning with several deep and shallow models.
no code implementations • 13 Nov 2020 • KrishnaTeja Killamsetty, Changbin Li, Chen Zhao, Rishabh Iyer, Feng Chen
Model-Agnostic Meta-Learning (MAML), a popular gradient-based meta-learning framework, assumes that the contribution of each task or instance to the meta-learner is equal.
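The equal-contribution assumption is easiest to see in code. Below is a minimal first-order MAML (FOMAML) sketch on toy linear regression, with the line that averages tasks with equal weights marked; this is the baseline assumption the paper revisits, not the paper's own method, and all names are illustrative.

```python
# First-order MAML on toy linear regression (a sketch).
import numpy as np

def loss_grad(w, X, y):  # gradient of mean-squared error for y ~ X @ w
    return 2 * X.T @ (X @ w - y) / len(y)

def fomaml_step(w, tasks, inner_lr=0.01, outer_lr=0.001, inner_steps=5):
    meta_grad = np.zeros_like(w)
    for (X_sup, y_sup, X_qry, y_qry) in tasks:
        w_task = w.copy()
        for _ in range(inner_steps):  # inner loop: adapt to this task
            w_task -= inner_lr * loss_grad(w_task, X_sup, y_sup)
        meta_grad += loss_grad(w_task, X_qry, y_qry)
    # Outer loop: every task contributes with EQUAL weight -- the assumption
    # that can hurt when tasks or instances differ in usefulness.
    return w - outer_lr * meta_grad / len(tasks)
```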
1 code implementation • Findings (ACL) 2021 • Ayush Maheshwari, Oishik Chatterjee, KrishnaTeja Killamsetty, Ganesh Ramakrishnan, Rishabh Iyer
The first contribution of this work is the introduction of a semi-supervised data programming framework that learns a joint model, effectively using the rules/labeling functions along with semi-supervised loss functions on the feature space.
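To illustrate the shape of such a joint objective (not the paper's actual formulation), one can combine a supervised loss on the small labeled set with a down-weighted loss against LF-aggregated noisy labels on the unlabeled set; the functions and the weighting scheme below are illustrative assumptions.

```python
# Shape of a joint labeled + LF-supervised objective (a sketch).
import numpy as np

def cross_entropy(probs, labels):
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def joint_loss(p_labeled, y_true, p_unlabeled, y_lf, lam=0.5):
    l_sup = cross_entropy(p_labeled, y_true)    # supervised term
    l_weak = cross_entropy(p_unlabeled, y_lf)   # LF-derived noisy labels
    return l_sup + lam * l_weak                 # lam trades off the two
```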