Search Results for author: Thomas Davidson

Found 5 papers, 4 papers with code

COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements

no code implementations · 3 Jun 2023 · Xuhui Zhou, Hao Zhu, Akhila Yerukola, Thomas Davidson, Jena D. Hwang, Swabha Swayamdipta, Maarten Sap

To study the contextual dynamics of offensiveness, we train models to generate COBRA explanations, with and without access to the context.

Examining Racial Bias in an Online Abuse Corpus with Structural Topic Modeling

1 code implementation · 26 May 2020 · Thomas Davidson, Debasmita Bhattacharya

We then use structural topic modeling to examine the content of the tweets and how the prevalence of different topics is related to both abusiveness annotation and dialect prediction.

Abusive Language

Racial Bias in Hate Speech and Abusive Language Detection Datasets

5 code implementations · WS 2019 · Thomas Davidson, Debasmita Bhattacharya, Ingmar Weber

Technologies for abusive language detection are being developed and applied with little consideration of their potential biases.

Abuse Detection · Abusive Language

Understanding Abuse: A Typology of Abusive Language Detection Subtasks

1 code implementation · WS 2017 · Zeerak Waseem, Thomas Davidson, Dana Warmsley, Ingmar Weber

As the body of research on abusive language detection and analysis grows, there is a need for critical consideration of the relationships between different subtasks that have been grouped under this label.

Abuse Detection · Abusive Language
