1 code implementation • 30 Mar 2024 • Hannah Chen, Yangfeng Ji, David Evans
Statistical fairness stipulates equivalent outcomes for every protected group, whereas causal fairness prescribes that a model makes the same prediction for an individual regardless of their protected characteristics.
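The contrast between the two notions can be sketched with a toy classifier. Everything here (the model, the data, the attribute values) is a hypothetical illustration, not the paper's method: statistical fairness compares outcome rates across groups, while causal fairness asks whether flipping an individual's protected attribute changes their prediction.

```python
def toy_model(features, group):
    # Hypothetical classifier that (unfairly) uses the protected attribute.
    score = sum(features)
    return 1 if score + (1 if group == "A" else 0) >= 2 else 0

# Statistical fairness: compare positive-prediction rates across groups.
data = [([1, 0], "A"), ([1, 1], "A"), ([1, 0], "B"), ([1, 1], "B")]
rate = {}
for g in ("A", "B"):
    preds = [toy_model(x, grp) for x, grp in data if grp == g]
    rate[g] = sum(preds) / len(preds)
statistical_gap = abs(rate["A"] - rate["B"])

# Causal (counterfactual) fairness: flip the protected attribute for one
# individual and check whether the model's prediction changes.
x, g = data[0]
counterfactually_fair = toy_model(x, "A") == toy_model(x, "B")

print(statistical_gap, counterfactually_fair)  # → 0.5 False
```

Here the model violates both criteria: the groups receive positive predictions at different rates, and the same individual is scored differently under each group label.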
1 code implementation • 20 Oct 2022 • Hannah Chen, Yangfeng Ji, David Evans
Traditional (fickle) adversarial examples involve finding a small perturbation that does not change an input's true label but confuses the classifier into outputting a different prediction.
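The fickle case can be illustrated with a deliberately brittle bag-of-words "classifier"; the scorer and the example sentences are hypothetical stand-ins, not the paper's models. A single-character typo leaves the true sentiment unchanged for a human reader but flips the model's output:

```python
POSITIVE = {"great", "good", "excellent"}

def classify(text):
    # Predict positive iff any known positive word appears verbatim.
    return "pos" if any(w in POSITIVE for w in text.lower().split()) else "neg"

original = "the movie was great"
# Perturbation: a one-character typo a human still reads as "great".
perturbed = "the movie was gre4t"

pred_original = classify(original)
pred_perturbed = classify(perturbed)
print(pred_original, pred_perturbed)  # → pos neg
```

The perturbed input is a fickle adversarial example in exactly the sense above: small change, same true label, different prediction.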
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Hannah Chen, Yangfeng Ji, David Evans
Most NLP datasets are manually labeled, and so suffer from inconsistent labeling or limited size.
no code implementations • ACL 2020 • Hannah Chen, Yangfeng Ji, David Evans
The prevailing approach to training and evaluating paraphrase identification models frames the task as binary classification: the model is given a pair of sentences, and is judged by how accurately it classifies pairs as either paraphrases or non-paraphrases.
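This binary framing can be sketched in a few lines; the word-overlap scorer below is a hypothetical stand-in for a trained model, and the pairs and threshold are invented for illustration:

```python
def predict_paraphrase(s1, s2, threshold=0.5):
    # Crude similarity stand-in: Jaccard overlap of the word sets.
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return (len(a & b) / len(a | b)) >= threshold

# Each pair carries a gold label: 1 = paraphrase, 0 = non-paraphrase.
pairs = [
    ("the cat sat on the mat", "the cat sat on a mat", 1),
    ("he plays the guitar", "she studies biology", 0),
]
correct = sum(predict_paraphrase(s1, s2) == bool(y) for s1, s2, y in pairs)
accuracy = correct / len(pairs)
print(accuracy)  # → 1.0
```

Evaluation then reduces to accuracy (or F1) over such labeled pairs, which is the setup the snippet above mirrors.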