Search Results for author: Divyansh Kaushik

Found 9 papers, 3 papers with code

Learning the Difference that Makes a Difference with Counterfactually-Augmented Data

2 code implementations ICLR 2020 Divyansh Kaushik, Eduard Hovy, Zachary C. Lipton

While classifiers trained on either original or manipulated data alone are sensitive to spurious features (e.g., mentions of genre), models trained on the combined data are less sensitive to this signal.

Counterfactual, Data Augmentation, +2
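The effect described in the abstract can be illustrated with a minimal sketch (the toy data, features, and training loop below are invented for illustration, not the paper's setup): a spurious feature that tracks the label perfectly in the original data loses its weight once counterfactually revised examples are added.

```python
import numpy as np

# Toy setup: feature 0 is the causal signal, feature 1 is a spurious
# cue (e.g. a genre mention) that tracks the label in the originals.
n = 200
y = np.array([0] * (n // 2) + [1] * (n // 2))
X_orig = np.column_stack([y, y]).astype(float)   # causal == spurious == y

# Counterfactual revisions flip the causal feature (and the label)
# while leaving the spurious cue untouched.
X_cf = X_orig.copy()
X_cf[:, 0] = 1.0 - X_cf[:, 0]
y_cf = 1 - y

def fit_logreg(X, y, lr=0.5, steps=2000):
    """Logistic regression via plain gradient descent, with intercept."""
    Xb = np.column_stack([X, np.ones(len(X))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w[:-1]                                 # feature weights only

w_orig = fit_logreg(X_orig, y)
w_comb = fit_logreg(np.vstack([X_orig, X_cf]), np.concatenate([y, y_cf]))

# The spurious feature's weight shrinks once counterfactual data is added.
print(abs(w_comb[1]) < abs(w_orig[1]))  # → True
```

On the original data the two features are indistinguishable, so the model splits weight between them; the counterfactual examples break that correlation and the spurious weight collapses toward zero.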

Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment

1 code implementation ICLR Workshop LLD 2019 Yifan Wu, Ezra Winston, Divyansh Kaushik, Zachary Lipton

Domain adaptation addresses the common problem when the target distribution generating our test data drifts from the source (training) distribution.

Domain Adaptation
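The drift problem the abstract refers to can be sketched in a few lines (the regression setup here is an invented illustration, not the paper's method): a misspecified model fit on source inputs degrades when evaluated on inputs drawn from a shifted target distribution.

```python
import numpy as np

# Fit a line to y = x^2 on a source interval, then evaluate on a
# drifted target interval; the misspecified model's error grows.
x_src = np.linspace(0.0, 1.0, 100)
x_tgt = np.linspace(2.0, 3.0, 100)       # target drifts away from source
coef = np.polyfit(x_src, x_src**2, 1)    # linear fit on source only
err_src = np.mean((np.polyval(coef, x_src) - x_src**2) ** 2)
err_tgt = np.mean((np.polyval(coef, x_tgt) - x_tgt**2) ** 2)
print(err_tgt > err_src)  # → True
```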

On the Efficacy of Adversarial Data Collection for Question Answering: Results from a Large-Scale Randomized Study

1 code implementation ACL 2021 Divyansh Kaushik, Douwe Kiela, Zachary C. Lipton, Wen-tau Yih

In adversarial data collection (ADC), a human workforce interacts with a model in real time, attempting to produce examples that elicit incorrect predictions.

Question Answering
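The ADC filtering criterion described above can be sketched as follows (function and data names are invented for illustration): of the examples a worker writes, only those that elicit an incorrect model prediction are kept.

```python
def collect_adversarial(candidates, model, gold):
    """Keep candidates on which `model` disagrees with the gold answer."""
    kept = []
    for question in candidates:
        if model(question) != gold[question]:
            kept.append(question)
    return kept

# Toy stand-ins: gold answers and a "model" that always says "Paris".
gold = {"capital of France?": "Paris", "2 + 2?": "4"}
model = lambda q: "Paris"
print(collect_adversarial(gold, model, gold))  # → ['2 + 2?']
```

In the real ADC setting the worker iterates against a live model in real time; this sketch captures only the keep-if-fooled criterion.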

Explaining The Efficacy of Counterfactually Augmented Data

no code implementations ICLR 2021 Divyansh Kaushik, Amrith Setlur, Eduard Hovy, Zachary C. Lipton

In attempts to produce ML models less reliant on spurious patterns in NLP datasets, researchers have recently proposed curating counterfactually augmented data (CAD) via a human-in-the-loop process in which, given some documents and their (initial) labels, humans must revise the text to make a counterfactual label applicable.

Counterfactual, Domain Generalization
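The revision step of the CAD protocol can be illustrated with a toy example (the word list and document below are invented): a label-bearing phrase is flipped while other cues, such as a genre mention, are left intact.

```python
# Toy flip list standing in for a human reviser (invented for this sketch).
FLIPS = {"great": "awful", "awful": "great"}

def revise(text):
    """Swap label-bearing words, leaving the rest of the text unchanged."""
    return " ".join(FLIPS.get(word, word) for word in text.split())

doc = "great acting in this horror movie"   # original label: positive
print(revise(doc))  # → "awful acting in this horror movie" (now negative)
```

Note that the spurious genre cue ("horror") survives the revision; only the label-relevant span changes, which is what makes the resulting pairs useful.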

Practical Benefits of Feature Feedback Under Distribution Shift

no code implementations 14 Oct 2021 Anurag Katakkar, Clay H. Yoo, Weiqin Wang, Zachary C. Lipton, Divyansh Kaushik

In attempts to develop sample-efficient and interpretable algorithms, researchers have explored myriad mechanisms for collecting and exploiting feature feedback (or rationales): auxiliary annotations provided for training (but not test) instances that highlight salient evidence.

Natural Language Inference, Sentiment Analysis
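One simple way such annotations could be exploited can be sketched as follows (the arrays and the masking strategy are illustrative assumptions, not the paper's method): treat each rationale as a binary mask over input features and train only on the masked, evidence-bearing parts.

```python
import numpy as np

# Two toy training instances; feature 1 is irrelevant noise.
X = np.array([[1.0, 5.0],
              [0.0, 3.0]])
# Annotator-provided rationales mark feature 0 as the salient evidence;
# these masks exist for training instances only.
rationale = np.array([[1, 0],
                      [1, 0]])
X_train = X * rationale   # keep only the evidence the rationale highlights
print(X_train.tolist())   # → [[1.0, 0.0], [0.0, 0.0]]
```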

Resolving the Human Subjects Status of Machine Learning's Crowdworkers

no code implementations 8 Jun 2022 Divyansh Kaushik, Zachary C. Lipton, Alex John London

We highlight two challenges posed by ML: the same set of workers can serve multiple roles and provide many sorts of information; and ML research tends to embrace a dynamic workflow, where research questions are seldom stated ex ante and data sharing opens the door for future studies to aim questions at different targets.

Ethics
