Search Results for author: Divyansh Kaushik

Found 9 papers, 3 papers with code

Resolving the Human Subjects Status of Machine Learning's Crowdworkers

no code implementations · 8 Jun 2022 · Divyansh Kaushik, Zachary C. Lipton, Alex John London

In recent years, machine learning (ML) has come to rely more heavily on crowdworkers, both for building bigger datasets and for addressing research questions requiring human interaction or judgment.

Natural Language Processing

Practical Benefits of Feature Feedback Under Distribution Shift

no code implementations · 14 Oct 2021 · Anurag Katakkar, Weiqin Wang, Clay H. Yoo, Zachary C. Lipton, Divyansh Kaushik

In attempts to develop sample-efficient algorithms, researchers have explored myriad mechanisms for collecting and exploiting feature feedback: auxiliary annotations, provided for training (but not test) instances, that highlight salient evidence.

Natural Language Inference · Sentiment Analysis

On the Efficacy of Adversarial Data Collection for Question Answering: Results from a Large-Scale Randomized Study

1 code implementation · ACL 2021 · Divyansh Kaushik, Douwe Kiela, Zachary C. Lipton, Wen-tau Yih

In adversarial data collection (ADC), a human workforce interacts with a model in real time, attempting to produce examples that elicit incorrect predictions.

Question Answering
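The snippet above describes the core ADC loop: a worker-written example is kept only when it fools the model in real time. A minimal sketch of that filtering criterion, where the model, data, and helper name are all illustrative stand-ins rather than anything from the paper:

```python
# Hypothetical sketch of the adversarial filtering criterion in ADC.
# The model and examples below are toy stand-ins, not the study's setup.

def keep_adversarial(examples, model):
    """Return only the examples that the current model gets wrong."""
    return [(text, gold) for text, gold in examples if model(text) != gold]

# Toy stand-in model: predicts "positive" iff the text mentions "great".
def toy_model(text):
    return "positive" if "great" in text else "negative"

candidates = [
    ("a great film in every way", "positive"),            # model correct: discarded
    ("great effects cannot save this mess", "negative"),  # model fooled: kept
]

adversarial_set = keep_adversarial(candidates, toy_model)
```

In the study itself this loop runs with human annotators interacting with a live QA model; the sketch only captures the keep-if-incorrect selection rule.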

Explaining The Efficacy of Counterfactually Augmented Data

no code implementations · ICLR 2021 · Divyansh Kaushik, Amrith Setlur, Eduard Hovy, Zachary C. Lipton

In attempts to produce ML models less reliant on spurious patterns in NLP datasets, researchers have recently proposed curating counterfactually augmented data (CAD) via a human-in-the-loop process in which, given some documents and their (initial) labels, humans must revise the text to make a counterfactual label applicable.

Domain Generalization

Learning the Difference that Makes a Difference with Counterfactually-Augmented Data

2 code implementations · ICLR 2020 · Divyansh Kaushik, Eduard Hovy, Zachary C. Lipton

While classifiers trained on either original or manipulated data alone are sensitive to spurious features (e.g., mentions of genre), models trained on the combined data are less sensitive to this signal.

Data Augmentation · Natural Language Inference · +2
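The snippet above claims that combining original and counterfactually revised data weakens spurious cues such as genre mentions. A minimal illustrative sketch of why (the data and helper below are hypothetical, not from the paper): in the original set the genre word co-occurs only with one label, while pairing each example with a label-flipped revision breaks that correlation.

```python
# Hypothetical illustration of counterfactual augmentation (data and helper
# are invented for this sketch, not taken from the paper's datasets).

original = [
    ("this horror film is gripping and terrifying", "positive"),
    ("a romance that drags on and on", "negative"),
]

# Human-revised counterfactuals: minimal edits flip each label while the
# spurious genre cue ("horror") is left unchanged.
counterfactual = [
    ("this horror film is dull and tedious", "negative"),
    ("a romance that is warm and delightful", "positive"),
]

combined = original + counterfactual

def cue_label_correlation(dataset, cue="horror"):
    """Fraction of cue-containing examples labeled positive;
    0.5 means the cue carries no label signal."""
    with_cue = [label for text, label in dataset if cue in text]
    return sum(label == "positive" for label in with_cue) / len(with_cue)
```

On `original` alone the cue is perfectly predictive of the positive label; on `combined` it appears equally under both labels, so a classifier trained on the union cannot lean on it.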

Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment

1 code implementation · ICLR Workshop LLD 2019 · Yifan Wu, Ezra Winston, Divyansh Kaushik, Zachary Lipton

Domain adaptation addresses the common problem that arises when the target distribution generating our test data drifts from the source (training) distribution.

Domain Adaptation
