Search Results for author: Vihari Piratla

Found 10 papers, 7 papers with code

Training for the Future: A Simple Gradient Interpolation Loss to Generalize Along Time

1 code implementation • NeurIPS 2021 • Anshul Nasery, Soumyadeep Thakur, Vihari Piratla, Abir De, Sunita Sarawagi

In several real world applications, machine learning models are deployed to make predictions on data whose distribution changes gradually along time, leading to a drift between the train and test distributions.
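The paper's gradient interpolation loss itself is more involved than this snippet lets on; the sketch below is mine, not the authors', and only illustrates the general idea of time-conditioned training, assuming a regression model `model(x, t)` that takes the time coordinate as an input: the prediction at a nearby time `t + delta` is held close to its first-order Taylor expansion, so the model extrapolates smoothly along time.

```python
import torch
import torch.nn.functional as F

def gradient_interpolation_style_loss(model, x, y, t, delta=0.1, lam=1.0):
    """Hedged sketch of a gradient-interpolation-style loss; the function
    name, scalar output, and MSE choices are assumptions for illustration."""
    t = t.clone().requires_grad_(True)
    pred = model(x, t)                   # prediction at the observed time
    task_loss = F.mse_loss(pred, y)      # ordinary supervised fit
    # d(pred)/dt via autograd; create_graph=True keeps the term trainable
    dpred_dt = torch.autograd.grad(pred.sum(), t, create_graph=True)[0]
    taylor = pred + delta * dpred_dt     # first-order extrapolation in time
    pred_future = model(x, t.detach() + delta)
    # penalise disagreement between the shifted prediction and its expansion
    return task_loss + lam * F.mse_loss(pred_future, taylor)
```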

Active Assessment of Prediction Services as Accuracy Surface Over Attribute Combinations

1 code implementation • NeurIPS 2021 • Vihari Piratla, Soumen Chakrabarti, Sunita Sarawagi

Our goal is to evaluate the accuracy of a black-box classification model, not as a single aggregate on a given test data distribution, but as a surface over a large number of combinations of attributes characterizing multiple test data distributions.
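The paper's contribution is actively estimating this surface under a limited labelling budget; the brute-force sketch below only shows the object being estimated, one accuracy per attribute combination. The `label` column name and the callable `predict` are assumptions, not the paper's API.

```python
import pandas as pd

def accuracy_surface(df: pd.DataFrame, predict, attributes):
    """One accuracy estimate per attribute combination, plus its support.
    Assumes df has one test example per row with a ground-truth 'label'
    column; predict(df) returns the black-box model's predicted labels."""
    out = df.copy()
    out["correct"] = (predict(out) == out["label"]).astype(float)
    return out.groupby(list(attributes))["correct"].agg(
        accuracy="mean", count="size"
    )
```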

An Analysis of Frame-skipping in Reinforcement Learning

no code implementations • 7 Feb 2021 • Shivaram Kalyanakrishnan, Siddharth Aravindan, Vishwajeet Bagdawat, Varun Bhatt, Harshith Goka, Archit Gupta, Kalpesh Krishna, Vihari Piratla

In this paper, we investigate the role of the parameter $d$ in RL; $d$ is called the "frame-skip" parameter, since states in the Atari domain are image frames and sensing only every $d$-th state amounts to skipping the frames in between.

Decision Making • Frame +1
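For readers unfamiliar with the convention being analysed: frame-skipping is usually implemented as an action-repeat wrapper. The sketch below assumes a Gym-style environment interface and illustrates the scheme; it is not code from the paper.

```python
class FrameSkip:
    """Repeat each selected action for d consecutive environment steps,
    returning only every d-th state to the agent (Gym-style interface
    assumed)."""

    def __init__(self, env, d):
        self.env, self.d = env, d  # d >= 1 is the frame-skip parameter

    def reset(self):
        return self.env.reset()

    def step(self, action):
        total_reward, done, info = 0.0, False, {}
        for _ in range(self.d):
            state, reward, done, info = self.env.step(action)
            total_reward += reward  # rewards accumulate over skipped frames
            if done:
                break
        return state, total_reward, done, info
```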

Untapped Potential of Data Augmentation: A Domain Generalization Viewpoint

no code implementations • 9 Jul 2020 • Vihari Piratla, Shiv Shankar

It is believed that by processing augmented inputs in tandem with the original ones, the model learns a more robust set of features which are shared between the original and augmented counterparts.

Data Augmentation • Domain Generalization
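As a concrete reading of "in tandem": a single training step that forwards both views of the batch together. This is a generic sketch, not the paper's experimental setup; the optional consistency weight `lam` is an illustrative addition.

```python
import torch.nn.functional as F

def tandem_step(model, x, x_aug, y, lam=0.0):
    """Forward the original batch and its augmented counterpart together."""
    logits, logits_aug = model(x), model(x_aug)
    loss = F.cross_entropy(logits, y) + F.cross_entropy(logits_aug, y)
    if lam > 0:  # optional consistency term tying the two views' predictions
        loss = loss + lam * F.mse_loss(logits_aug, logits.detach())
    return loss
```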

Topic Sensitive Attention on Generic Corpora Corrects Sense Bias in Pretrained Embeddings

1 code implementation • ACL 2019 • Vihari Piratla, Sunita Sarawagi, Soumen Chakrabarti

Given a small corpus $\mathcal D_T$ pertaining to a limited set of focused topics, our goal is to train embeddings that accurately capture the sense of words in the topic in spite of the limited size of $\mathcal D_T$.
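One plausible, purely hypothetical instantiation of using a generic corpus in a topic-sensitive way (not the paper's attention mechanism): weight each generic-corpus sentence by its similarity to the centroid of $\mathcal D_T$ before it contributes to embedding training.

```python
import numpy as np

def topic_relevance_weights(generic_vecs, topic_vecs):
    """Hypothetical sketch: cosine similarity of each generic-corpus sentence
    vector (rows of generic_vecs) to the centroid of the topical corpus D_T
    (rows of topic_vecs), normalised into sampling weights."""
    centroid = topic_vecs.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    norms = np.linalg.norm(generic_vecs, axis=1, keepdims=True)
    sims = (generic_vecs / norms) @ centroid
    sims = np.clip(sims, 0.0, None)   # discard negatively related contexts
    return sims / sims.sum()          # weights for sampling training contexts
```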
