Annotating Online Misogyny

ACL 2021  ·  Philine Zeinert, Nanna Inie, Leon Derczynski

Online misogyny, a category of online abusive language, has serious and harmful social consequences. Automatic detection of misogynistic language online, while imperative, poses complicated challenges to data gathering, data annotation, and bias mitigation, as this type of data is linguistically complex and diverse. This paper makes three contributions in this area: firstly, we describe the detailed design of our iterative annotation process and codebook; secondly, we present a comprehensive taxonomy of labels for annotating misogyny in natural written language; and finally, we introduce a high-quality dataset of annotated social media posts.


Datasets


Introduced in the Paper:

bajer_danish_misogyny
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Hate Speech Detection | bajer_danish_misogyny (AOM) | mBERT | F1 | 0.8549 | #1 |
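
The leaderboard entry above reports an mBERT baseline with an F1 of 0.8549 on the bajer_danish_misogyny (AOM) benchmark. The sketch below shows, in broad strokes, how such a multilingual-BERT classifier could be fine-tuned with Hugging Face Transformers; the checkpoint name, training settings, and the placeholder Danish examples are assumptions for illustration, not the authors' published pipeline, and the real dataset must be obtained separately.

```python
# Minimal sketch: fine-tune multilingual BERT for binary misogyny detection.
# The two example texts and labels below are placeholders standing in for the
# bajer_danish_misogyny data, which is not bundled with this snippet.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

texts = ["eksempel på en neutral kommentar",      # placeholder, label 0
         "eksempel på en misogyn kommentar"]      # placeholder, label 1
labels = [0, 1]                                   # 0 = not misogynistic, 1 = misogynistic

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
enc = tok(texts, truncation=True, padding=True, return_tensors="pt")

class ToyDataset(torch.utils.data.Dataset):
    """Wraps tokenized texts and labels for the Trainer API."""
    def __init__(self, enc, labels):
        self.enc, self.labels = enc, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mbert-misogyny",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ToyDataset(enc, labels),
)
trainer.train()
```

With the actual annotated corpus in place of the toy examples, evaluation would typically report F1 on a held-out split, matching the metric shown in the table.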

Methods


No methods listed for this paper.