Search Results for author: Hannah Chen

Found 4 papers, 3 papers with code

Addressing Both Statistical and Causal Gender Fairness in NLP Models

1 code implementation • 30 Mar 2024 • Hannah Chen, Yangfeng Ji, David Evans

Statistical fairness stipulates equivalent outcomes for every protected group, whereas causal fairness prescribes that a model makes the same prediction for an individual regardless of their protected characteristics.

Counterfactual Data Augmentation +1
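The abstract's distinction can be sketched in code. A minimal illustration, with a hypothetical model interface, word-swap list, and helper names (none of these come from the paper's released implementation): causal fairness checks that a prediction is invariant under swapping an individual's gendered terms, while statistical fairness compares outcome rates across groups.

```python
# Hypothetical gendered-term swap table for the counterfactual check.
GENDER_SWAP = {"he": "she", "she": "he", "his": "her", "her": "his",
               "man": "woman", "woman": "man"}

def swap_gender(text):
    """Replace each gendered word with its counterpart."""
    return " ".join(GENDER_SWAP.get(w, w) for w in text.lower().split())

def is_counterfactually_fair(model, text):
    """Causal fairness: the prediction must not change when only the
    protected characteristic (here, gendered wording) changes."""
    return model(text) == model(swap_gender(text))

def demographic_parity_gap(model, group_a, group_b):
    """Statistical fairness: compare positive-prediction rates between
    two protected groups; 0.0 means equivalent outcomes."""
    rate = lambda xs: sum(model(x) for x in xs) / len(xs)
    return abs(rate(group_a) - rate(group_b))
```

A model can satisfy one criterion and not the other, which is why the paper addresses both: a classifier keying on "doctor" passes the counterfactual check, while one keying on "he" fails it.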

Balanced Adversarial Training: Balancing Tradeoffs between Fickleness and Obstinacy in NLP Models

1 code implementation • 20 Oct 2022 • Hannah Chen, Yangfeng Ji, David Evans

Traditional (fickle) adversarial examples involve finding a small perturbation that does not change an input's true label but confuses the classifier into outputting a different prediction.

Contrastive Learning Natural Language Inference +1
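The fickle case the abstract describes can be sketched as a word-substitution search. This is a toy illustration with a hypothetical model interface and synonym table, not the paper's attack: it looks for a small, label-preserving perturbation (a single synonym swap) that nonetheless flips the classifier's prediction.

```python
# Hypothetical synonym table; a real attack would use embeddings or a thesaurus.
SYNONYMS = {"great": ["fine", "good"], "movie": ["film"], "terrible": ["awful"]}

def fickle_attack(model, words):
    """Try single-word synonym swaps (assumed not to change the true label);
    return the first perturbed sentence that flips the model's prediction."""
    original = model(" ".join(words))
    for i, w in enumerate(words):
        for s in SYNONYMS.get(w, []):
            candidate = words[:i] + [s] + words[i + 1:]
            if model(" ".join(candidate)) != original:
                return " ".join(candidate)
    return None  # no fickle adversarial example found within one swap
```

An obstinate attack would invert this: change the input enough to alter its true label while the model's prediction stubbornly stays the same — the tradeoff the paper's balanced training targets.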

Pointwise Paraphrase Appraisal is Potentially Problematic

no code implementations • ACL 2020 • Hannah Chen, Yangfeng Ji, David Evans

The prevailing approach for training and evaluating paraphrase identification models frames the task as a binary classification problem: the model is given a pair of sentences and is judged by how accurately it classifies pairs as either paraphrases or non-paraphrases.

Binary Classification Paraphrase Identification
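The pointwise setup the abstract questions can be sketched in a few lines. A minimal illustration with a hypothetical scoring function (the function names and threshold are assumptions, not from the paper): each sentence pair is scored independently and thresholded into a binary paraphrase / non-paraphrase label.

```python
def pointwise_classify(score_fn, pairs, threshold=0.5):
    """Label each (s1, s2) pair independently: 1 = paraphrase, 0 = not.
    Each pair is appraised in isolation, with no view of related pairs."""
    return [int(score_fn(s1, s2) >= threshold) for s1, s2 in pairs]

def accuracy(preds, labels):
    """The model is judged by how accurately it labels the pairs."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def jaccard(s1, s2):
    """Toy stand-in scorer: word-overlap similarity between two sentences."""
    a, b = set(s1.split()), set(s2.split())
    return len(a & b) / len(a | b)
```

Because each pair is judged in isolation, such a model can make mutually inconsistent decisions across related pairs — the kind of problem the paper's title alludes to.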
