Search Results for author: Vihari Piratla

Found 15 papers, 10 papers with code

Estimation of Concept Explanations Should be Uncertainty Aware

1 code implementation 13 Dec 2023 Vihari Piratla, Juyeon Heo, Katherine M. Collins, Sukriti Singh, Adrian Weller

We believe the improved quality of uncertainty-aware concept explanations makes them a strong candidate for more reliable model interpretation.

Use Perturbations when Learning from Explanations

1 code implementation NeurIPS 2023 Juyeon Heo, Vihari Piratla, Matthew Wicker, Adrian Weller

Machine learning from explanations (MLX) is an approach to learning that uses human-provided explanations of relevant or irrelevant features for each input to ensure that model predictions are right for the right reasons.
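
As a rough sketch of the MLX setup (not necessarily the perturbation-based method this paper proposes), a common baseline penalizes the model's input gradients on features a human has marked irrelevant; the model, mask name, and penalty weight below are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def mlx_loss(model, x, y, irrelevant_mask, lam=1.0):
        # Illustrative "right for the right reasons"-style objective:
        # cross-entropy plus a penalty on input gradients over features
        # that the human explanation marks as irrelevant (mask == 1).
        x = x.clone().requires_grad_(True)
        ce = F.cross_entropy(model(x), y)
        grads = torch.autograd.grad(ce, x, create_graph=True)[0]
        penalty = (grads * irrelevant_mask).pow(2).sum()
        return ce + lam * penalty

The paper itself studies perturbation-based alternatives to such gradient penalties; the snippet is only meant to fix the problem setting.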

Robustness, Evaluation and Adaptation of Machine Learning Models in the Wild

no code implementations 5 Mar 2023 Vihari Piratla

While we improve robustness over standard training methods for certain problem settings, the performance of ML systems can still vary drastically with domain shifts.

Implicit Training of Energy Model for Structure Prediction

no code implementations 21 Nov 2022 Shiv Shankar, Vihari Piratla

Most deep learning research has focused on developing new models and training procedures.

Human-in-the-Loop Mixup

1 code implementation 2 Nov 2022 Katherine M. Collins, Umang Bhatt, Weiyang Liu, Vihari Piratla, Ilia Sucholutsky, Bradley Love, Adrian Weller

We focus on the synthetic data used in mixup: a powerful regularizer shown to improve model robustness, generalization, and calibration.
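
Since this snippet hinges on mixup, here is a minimal sketch of the standard mixup operation it refers to: pairs of inputs and one-hot labels are blended with a Beta-distributed coefficient (alpha = 0.2 is just a common illustrative default, not a value from the paper).

    import numpy as np
    import torch

    def mixup_batch(x, y_onehot, alpha=0.2):
        # Blend the batch with a shuffled copy of itself; the same
        # coefficient lam mixes both the inputs and the soft labels.
        lam = float(np.random.beta(alpha, alpha))
        perm = torch.randperm(x.size(0))
        x_mix = lam * x + (1.0 - lam) * x[perm]
        y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
        return x_mix, y_mix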

Training for the Future: A Simple Gradient Interpolation Loss to Generalize Along Time

1 code implementation NeurIPS 2021 Anshul Nasery, Soumyadeep Thakur, Vihari Piratla, Abir De, Sunita Sarawagi

In several real world applications, machine learning models are deployed to make predictions on data whose distribution changes gradually along time, leading to a drift between the train and test distributions.

Active Assessment of Prediction Services as Accuracy Surface Over Attribute Combinations

1 code implementation NeurIPS 2021 Vihari Piratla, Soumen Chakrabarti, Sunita Sarawagi

Our goal is to evaluate the accuracy of a black-box classification model, not as a single aggregate on a given test data distribution, but as a surface over a large number of combinations of attributes characterizing multiple test data distributions.
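
To fix ideas, the quantity being estimated can be written down naively as below when every example is labelled; the paper's contribution is estimating this surface actively, without exhaustive labelling (the column names here are hypothetical).

    import pandas as pd

    def accuracy_surface(df, attributes=("attr_a", "attr_b")):
        # Per attribute-combination accuracy of a black-box model,
        # assuming the frame has 'label' and 'prediction' columns plus
        # the attribute columns; returns mean accuracy and support per cell.
        correct = (df["label"] == df["prediction"]).astype(float)
        return (df.assign(correct=correct)
                  .groupby(list(attributes))["correct"]
                  .agg(["mean", "count"]))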

An Analysis of Frame-skipping in Reinforcement Learning

no code implementations 7 Feb 2021 Shivaram Kalyanakrishnan, Siddharth Aravindan, Vishwajeet Bagdawat, Varun Bhatt, Harshith Goka, Archit Gupta, Kalpesh Krishna, Vihari Piratla

In this paper, we investigate the role of the parameter $d$ in RL; $d$ is called the "frame-skip" parameter, since states in the Atari domain are images (frames).
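
As a concrete illustration of what the frame-skip parameter $d$ does in practice (not code from the paper), each chosen action is repeated for $d$ consecutive environment frames while rewards accumulate; the wrapper below assumes a Gym-style step/reset interface.

    class FrameSkip:
        # Illustrative wrapper: repeat each action for d frames and
        # return the accumulated reward with the last observation.
        def __init__(self, env, d=4):
            self.env = env
            self.d = d

        def reset(self):
            return self.env.reset()

        def step(self, action):
            total_reward, done, info = 0.0, False, {}
            for _ in range(self.d):
                obs, reward, done, info = self.env.step(action)
                total_reward += reward
                if done:
                    break
            return obs, total_reward, done, info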

Untapped Potential of Data Augmentation: A Domain Generalization Viewpoint

no code implementations 9 Jul 2020 Vihari Piratla, Shiv Shankar

It is believed that by processing augmented inputs in tandem with the original ones, the model learns a more robust set of features which are shared between the original and augmented counterparts.
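
The "in tandem" training that this belief refers to is, in its simplest form, just applying the supervised loss to both the original input and its augmented copy; the sketch below only pins down that baseline, not the analysis in the paper.

    import torch.nn.functional as F

    def tandem_loss(model, x, x_aug, y):
        # Standard augmented training: the same label supervises both
        # the original input and its augmentation, which implicitly
        # encourages features shared between the two.
        return F.cross_entropy(model(x), y) + F.cross_entropy(model(x_aug), y)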

Topic Sensitive Attention on Generic Corpora Corrects Sense Bias in Pretrained Embeddings

1 code implementation ACL 2019 Vihari Piratla, Sunita Sarawagi, Soumen Chakrabarti

Given a small corpus $\mathcal D_T$ pertaining to a limited set of focused topics, our goal is to train embeddings that accurately capture the sense of words in the topic in spite of the limited size of $\mathcal D_T$.
