no code implementations • 3 Jun 2023 • Xuhui Zhou, Hao Zhu, Akhila Yerukola, Thomas Davidson, Jena D. Hwang, Swabha Swayamdipta, Maarten Sap
To study the contextual dynamics of offensiveness, we train models to generate COBRA explanations, with and without access to the context.
1 code implementation • 26 May 2020 • Thomas Davidson, Debasmita Bhattacharya
We then use structural topic modeling to examine the content of the tweets and how the prevalence of different topics is related to both abusiveness annotation and dialect prediction.
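The topic-modeling step described above can be sketched in miniature. The paper uses structural topic modeling (commonly the R `stm` package); as a rough Python analogue under that assumption, plain LDA in scikit-learn on toy tweet-like texts shows the shape of the analysis — per-document topic proportions that could then be compared across annotation labels or predicted dialect groups:

```python
# Hedged sketch: the paper uses structural topic modeling (an R-based
# method); here plain LDA stands in to illustrate the same idea on
# invented placeholder texts, not the paper's Twitter corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "police protest march city street",
    "game last night team big win",
    "protest crowd police downtown street",
    "team score win season game tonight",
]

# Bag-of-words counts, then a 2-topic LDA fit.
counts = CountVectorizer().fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Rows are per-tweet topic proportions (each row sums to 1); topic
# prevalence could then be related to abusiveness labels or dialect.
doc_topics = lda.transform(counts)
print(doc_topics.shape)  # → (4, 2)
```

Structural topic modeling extends this by letting document covariates (here, annotation label and predicted dialect) shape topic prevalence directly, rather than comparing proportions after the fact.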
5 code implementations • WS 2019 • Thomas Davidson, Debasmita Bhattacharya, Ingmar Weber
Technologies for abusive language detection are being developed and applied with little consideration of their potential biases.
1 code implementation • WS 2017 • Zeerak Waseem, Thomas Davidson, Dana Warmsley, Ingmar Weber
As the body of research on abusive language detection and analysis grows, there is a need for critical consideration of the relationships between different subtasks that have been grouped under this label.
9 code implementations • 11 Mar 2017 • Thomas Davidson, Dana Warmsley, Michael Macy, Ingmar Weber
We train a multi-class classifier to distinguish between these different categories.
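A minimal sketch of such a multi-class setup, assuming the three categories from the 2017 paper (hate speech, offensive language, neither); the features, model, and training data here are invented placeholders, not the paper's actual pipeline:

```python
# Hedged illustration: TF-IDF + logistic regression as a stand-in for
# the paper's multi-class classifier. Texts and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder examples for the three classes (not from the real corpus).
texts = [
    "targeted slur against a group", "dehumanizing attack on a group",
    "rude insult with profanity", "crude but untargeted profanity",
    "ordinary chat about the weather", "neutral comment about sports",
]
labels = ["hate", "hate", "offensive", "offensive", "neither", "neither"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

# The fitted pipeline assigns one of the three classes to new text.
print(clf.predict(["neutral comment about the weather"])[0])
```

The three-way split matters because, as the paper argues, offensive-but-not-hateful language is easily conflated with hate speech by binary classifiers.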