Search Results for author: Aida Mostafazadeh Davani

Found 10 papers, 2 papers with code

The Moral Foundations Reddit Corpus

no code implementations · 10 Aug 2022 · Jackson Trager, Alireza S. Ziabari, Aida Mostafazadeh Davani, Preni Golazizian, Farzan Karimi-Malekabadi, Ali Omrani, Zhihe Li, Brendan Kennedy, Nils Karl Reimer, Melissa Reyes, Kelsey Cheng, Mellow Wei, Christina Merrifield, Arta Khosravi, Evans Alvarez, Morteza Dehghani

Moral framing and sentiment can affect a variety of online and offline behaviors, including donation, pro-environmental action, political engagement, and even participation in violent protests.

Domain Classification · Sentiment Analysis · +2

Hate Speech Classifiers Learn Human-Like Social Stereotypes

no code implementations · 28 Oct 2021 · Aida Mostafazadeh Davani, Mohammad Atari, Brendan Kennedy, Morteza Dehghani

Social stereotypes negatively impact individuals' judgements about different groups and may play a critical role in how people understand language directed toward minority social groups.

Fairness

Dealing with Disagreements: Looking Beyond the Majority Vote in Subjective Annotations

no code implementations · 12 Oct 2021 · Aida Mostafazadeh Davani, Mark Díaz, Vinodkumar Prabhakaran

Majority voting and averaging are common approaches employed to resolve annotator disagreements and derive single ground truth labels from multiple annotations.
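The two aggregation approaches described above can be sketched in a few lines of Python (function names and the example annotations are illustrative, not from the paper):

```python
from collections import Counter
from statistics import mean

def majority_vote(labels):
    """Resolve categorical annotator labels to one label by majority vote."""
    (winner, _), = Counter(labels).most_common(1)
    return winner

def average_score(scores):
    """Resolve numeric annotator ratings to one score by averaging."""
    return mean(scores)

# Hypothetical annotations for a single instance from three annotators.
majority_vote(["hate", "not_hate", "hate"])  # -> "hate"
average_score([4, 5, 3])                     # -> 4
```

Both reductions discard the disagreement signal itself, which is the point the paper interrogates for subjective tasks.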

Binary Classification

On Releasing Annotator-Level Labels and Information in Datasets

no code implementations · EMNLP (LAW, DMR) 2021 · Vinodkumar Prabhakaran, Aida Mostafazadeh Davani, Mark Díaz

A common practice in building NLP datasets, especially using crowd-sourced annotations, involves obtaining multiple annotator judgements on the same data instances, which are then flattened to produce a single "ground truth" label or score, through majority voting, averaging, or adjudication.

Improving Counterfactual Generation for Fair Hate Speech Detection

no code implementations · ACL (WOAH) 2021 · Aida Mostafazadeh Davani, Ali Omrani, Brendan Kennedy, Mohammad Atari, Xiang Ren, Morteza Dehghani

By applying logit pairing to equalize outcomes on the restricted set of counterfactuals for each instance, we improve fairness metrics while preserving model performance on hate speech detection.
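A rough sketch of the logit-pairing idea: a squared-difference penalty between the model's logits on each instance and on its counterfactuals, added to the task loss. The function name and the `lam` weight are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def logit_pairing_penalty(logits_orig, logits_cf, lam=1.0):
    """Penalty pushing a model's logits on a sentence and on its
    social-group counterfactual toward equality (added to the task loss)."""
    diff = np.asarray(logits_orig, dtype=float) - np.asarray(logits_cf, dtype=float)
    return lam * float(np.mean(diff ** 2))

# Identical logits incur no penalty; diverging logits are penalized.
logit_pairing_penalty([1.2, -0.3], [1.2, -0.3])  # -> 0.0
logit_pairing_penalty([1.2, -0.3], [0.2, 0.7])   # -> 1.0
```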

Counterfactual · Fairness · +2

On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning

no code implementations · NAACL 2021 · Xisen Jin, Francesco Barbieri, Brendan Kennedy, Aida Mostafazadeh Davani, Leonardo Neves, Xiang Ren

Fine-tuned language models have been shown to exhibit biases against protected groups in a host of modeling tasks such as text classification and coreference resolution.

Coreference Resolution · Fairness · +6

Fair Hate Speech Detection through Evaluation of Social Group Counterfactuals

no code implementations · 24 Oct 2020 · Aida Mostafazadeh Davani, Ali Omrani, Brendan Kennedy, Mohammad Atari, Xiang Ren, Morteza Dehghani

Counterfactual token fairness for a mentioned social group evaluates whether the model's predictions are the same for (a) the actual sentence and (b) a counterfactual instance, generated by changing the mentioned social group in the sentence.
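A minimal sketch of that evaluation, assuming the model is exposed as a sentence-to-score function; the function name, the toy scorer, and the group terms below are all illustrative, not from the paper:

```python
def counterfactual_gap(predict, sentence, group, alternatives):
    """Largest change in the model's score when the mentioned social
    group is swapped for an alternative group term."""
    base = predict(sentence)
    return max(abs(base - predict(sentence.replace(group, alt)))
               for alt in alternatives)

# Toy scorer that (unfairly) scores sentences mentioning "groupA" higher.
toy_predict = lambda s: 0.9 if "groupA" in s else 0.2
counterfactual_gap(toy_predict, "groupA people are here",
                   "groupA", ["groupB", "groupC"])  # ~0.7; a fair model's gap is near 0
```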

Counterfactual · Fairness · +2

Contextualizing Hate Speech Classifiers with Post-hoc Explanation

3 code implementations · ACL 2020 · Brendan Kennedy, Xisen Jin, Aida Mostafazadeh Davani, Morteza Dehghani, Xiang Ren

Hate speech classifiers trained on imbalanced datasets struggle to determine if group identifiers like "gay" or "black" are used in offensive or prejudiced ways.
