1 code implementation • ICLR 2022 • Vihari Piratla, Praneeth Netrapalli, Sunita Sarawagi
We consider the problem of training a classification model with group annotated training data.
1 code implementation • NeurIPS 2021 • Anshul Nasery, Soumyadeep Thakur, Vihari Piratla, Abir De, Sunita Sarawagi
In several real-world applications, machine learning models are deployed to make predictions on data whose distribution changes gradually over time, leading to a drift between the train and test distributions.
1 code implementation • NeurIPS 2021 • Vihari Piratla, Soumen Chakrabarti, Sunita Sarawagi
Our goal is to evaluate the accuracy of a black-box classification model, not as a single aggregate on a given test data distribution, but as a surface over a large number of combinations of attributes characterizing multiple test data distributions.
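The snippet above frames evaluation not as one aggregate number but as a surface of accuracies over attribute combinations. A minimal sketch of that bookkeeping, with a hypothetical record format (the paper's actual contribution, actively choosing which combinations to query, is not shown):

```python
from collections import defaultdict

def accuracy_surface(records):
    """Accuracy per combination of attribute values.

    records: iterable of (attrs, is_correct), where attrs is a tuple of
    attribute values characterizing the example (illustrative format).
    """
    hits, total = defaultdict(int), defaultdict(int)
    for attrs, correct in records:
        total[attrs] += 1
        hits[attrs] += int(correct)
    return {attrs: hits[attrs] / total[attrs] for attrs in total}

records = [
    (("female", "young"), True),
    (("female", "young"), False),
    (("male", "old"), True),
]
print(accuracy_surface(records))
# {('female', 'young'): 0.5, ('male', 'old'): 1.0}
```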
no code implementations • 7 Feb 2021 • Shivaram Kalyanakrishnan, Siddharth Aravindan, Vishwajeet Bagdawat, Varun Bhatt, Harshith Goka, Archit Gupta, Kalpesh Krishna, Vihari Piratla
In this paper, we investigate the role of the parameter $d$ in RL; $d$ is called the "frame-skip" parameter, since states in the Atari domain are images.
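The frame-skip parameter $d$ mentioned above follows the common Atari convention: the chosen action is repeated for $d$ consecutive frames while rewards accumulate. A toy sketch (environment and names are illustrative):

```python
def step_with_skip(env_step, state, action, d):
    """Repeat `action` for d consecutive frames (the frame-skip parameter d),
    accumulating reward along the way."""
    total_reward, done = 0.0, False
    for _ in range(d):
        state, reward, done = env_step(state, action)
        total_reward += reward
        if done:
            break
    return state, total_reward, done

# Toy environment: state is a frame counter; the episode ends at frame 10.
def toy_step(state, action):
    state += 1
    return state, 1.0, state >= 10

state, reward, done = step_with_skip(toy_step, 0, action=None, d=4)
print(state, reward, done)  # 4 4.0 False
```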
no code implementations • Findings of the Association for Computational Linguistics 2020 • Sahil Shah, Vihari Piratla, Soumen Chakrabarti, Sunita Sarawagi
Each client uses an unsupervised, corpus-based sketch to register to the service.
no code implementations • 9 Jul 2020 • Vihari Piratla, Shiv Shankar
It is believed that by processing augmented inputs in tandem with the original ones, the model learns a more robust set of features that is shared between the original inputs and their augmented counterparts.
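A hedged sketch of what "processing augmented inputs in tandem" can look like: a joint objective that fits both copies of an input and, illustratively, ties their intermediate features together. All names are hypothetical; the paper analyzes this behaviour rather than prescribing this exact loss.

```python
import numpy as np

def joint_augmentation_loss(f, x, x_aug, y, lam=1.0):
    """Train on an input and its augmentation together.

    f maps an input to (features, class probabilities); the consistency
    term below is one illustrative way to encourage shared features.
    """
    feats, probs = f(x)
    feats_aug, probs_aug = f(x_aug)
    ce = -np.log(probs[y]) - np.log(probs_aug[y])    # fit both copies
    consistency = np.mean((feats - feats_aug) ** 2)  # share features
    return ce + lam * consistency

# Toy two-class model on 2-d inputs.
def f(x):
    feats = np.tanh(x)
    logits = np.array([feats.sum(), -feats.sum()])
    e = np.exp(logits - logits.max())
    return feats, e / e.sum()

x = np.array([0.5, -0.2])
x_aug = x + 0.01  # a tiny "augmentation"
print(joint_augmentation_loss(f, x, x_aug, y=0))
```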
1 code implementation • ICML 2020 • Vihari Piratla, Praneeth Netrapalli, Sunita Sarawagi
The domain specific components are discarded after training and only the common component is retained.
Ranked #1 on Domain Generalization on LipitK
1 code implementation • IJCNLP 2019 • Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, Vihari Piratla
We present a Parallel Iterative Edit (PIE) model for the problem of local sequence transduction arising in tasks like grammatical error correction (GEC).
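The edit-application step of such a model can be sketched with a simplified edit space (the actual PIE model predicts richer edits, including morphological transformations, with a BERT-style encoder; this toy version only illustrates applying independently predicted per-token edits in parallel):

```python
def apply_edits(tokens, edits):
    """Apply one edit per source token, all in parallel.

    Simplified edit space: "copy", "delete", ("replace", w), ("append", w).
    """
    out = []
    for tok, op in zip(tokens, edits):
        if op == "copy":
            out.append(tok)
        elif op == "delete":
            pass                      # drop the token
        elif op[0] == "replace":
            out.append(op[1])         # substitute a word
        elif op[0] == "append":
            out.extend([tok, op[1]])  # keep token, insert a word after it
    return out

src = ["He", "go", "to", "school"]
edits = ["copy", ("replace", "goes"), "copy", "copy"]
print(" ".join(apply_edits(src, edits)))  # He goes to school
```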
1 code implementation • ACL 2019 • Vihari Piratla, Sunita Sarawagi, Soumen Chakrabarti
Given a small corpus $\mathcal D_T$ pertaining to a limited set of focused topics, our goal is to train embeddings that accurately capture the sense of words in the topic in spite of the limited size of $\mathcal D_T$.
1 code implementation • ICLR 2018 • Shiv Shankar, Vihari Piratla, Soumen Chakrabarti, Siddhartha Chaudhuri, Preethi Jyothi, Sunita Sarawagi
We present CROSSGRAD, a method to use multi-domain training data to learn a classifier that generalizes to new domains.
Ranked #58 on Domain Generalization on PACS
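CROSSGRAD's augmentation step can be sketched with a toy linear domain classifier: an input is perturbed along the gradient of the domain-classification loss, synthesizing an example that looks like it came from a shifted domain while keeping its class label. The real method uses a neural domain classifier and also augments the domain network symmetrically; everything below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, num_domains = 8, 3
W_dom = rng.normal(size=(feat_dim, num_domains))  # toy linear domain classifier

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def domain_loss_grad(x, d):
    """Gradient of the domain cross-entropy loss w.r.t. the input x."""
    p = softmax(W_dom.T @ x)
    return W_dom @ (p - np.eye(num_domains)[d])

def cross_grad_augment(x, d, eps=0.5):
    """Perturb x along the domain-loss gradient; the label classifier is
    then trained on both x and x_aug with the same class label."""
    return x + eps * domain_loss_grad(x, d)

x = rng.normal(size=feat_dim)
x_aug = cross_grad_augment(x, d=0)
print(x_aug.shape)  # (8,)
```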