1 code implementation • NAACL (SocialNLP) 2022 • Fatma Elsafoury, Steven R. Wilson, Naeem Ramzan
In recent years, gray social media platforms, those with loose moderation policies on cyberbullying, have been attracting more users.
no code implementations • ACL 2022 • Fatma Elsafoury
Finally, I investigate the causal effect of the social and intersectional bias on the performance and unfairness of hate speech detection models.
no code implementations • COLING 2022 • Fatma Elsafoury, Steven R. Wilson, Stamos Katsigiannis, Naeem Ramzan
Systematic offensive stereotyping (SOS) in word embeddings could lead to associating marginalised groups with hate speech and profanity, which might result in blocking and silencing those groups, especially on social media platforms.
no code implementations • 31 Aug 2023 • Fatma Elsafoury
This paper is a summary of the work done in my PhD thesis.
no code implementations • 21 Aug 2023 • Fatma Elsafoury
Finally, we investigate the impact of the SOS bias in LMs on their performance and fairness on hate speech detection.
no code implementations • 22 May 2023 • Fatma Elsafoury, Stamos Katsigiannis
Even though there is evidence that language models are biased, the impact of that bias on the fairness of downstream NLP tasks is still understudied.
no code implementations • 16 May 2023 • Fatma Elsafoury, Gavin Abercrombie
In this paper, we trace the biases in current natural language processing (NLP) models back to their origins in racism, sexism, and homophobia over the last 500 years.