1 code implementation • EMNLP (ALW) 2020 • Maximilian Wich, Jan Bauer, Georg Groh
One challenge that social media platforms are facing nowadays is hate speech.
no code implementations • EMNLP (ALW) 2020 • Hala Al Kuwatly, Maximilian Wich, Georg Groh
To do so, we sample balanced subsets of data that are labeled by demographically distinct annotators.
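A minimal sketch of the balanced-subset idea described above, assuming an annotation table in pandas; the column names ("gender", "annotator_id") are illustrative placeholders, not the paper's actual schema.

```python
import pandas as pd

def balanced_subsets(annotations: pd.DataFrame, attribute: str, seed: int = 0):
    """Split annotations by a demographic attribute of the annotator and
    downsample each group to the size of the smallest one, so that
    classifiers trained per group see the same amount of data."""
    groups = {value: df for value, df in annotations.groupby(attribute)}
    n = min(len(df) for df in groups.values())
    return {value: df.sample(n=n, random_state=seed) for value, df in groups.items()}

# Example (hypothetical usage): one balanced subset per gender group;
# a separate hate speech classifier could then be trained on each subset
# and the models compared for bias.
# subsets = balanced_subsets(annotations, attribute="gender")
```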
1 code implementation • EMNLP (ALW) 2020 • Maximilian Wich, Hala Al Kuwatly, Georg Groh
In the scope of this study, we investigate annotator bias, a form of bias that annotators introduce due to differing knowledge of the task and their subjective perception.
no code implementations • RANLP 2021 • Maximilian Wich, Christian Widmer, Gerhard Hagerer, Georg Groh
A prevalent form of bias in hate speech and abusive language datasets is annotator bias caused by the annotator’s subjective perception and the complexity of the annotation task.
no code implementations • NAACL (SocialNLP) 2021 • Edoardo Mosca, Maximilian Wich, Georg Groh
As hate speech spreads on social media and online communities, research continues to work on its automatic detection.
1 code implementation • ICNLSP 2021 • Gerhard Johann Hagerer, David Szabo, Andreas Koch, Maria Luisa Ripoll Dominguez, Christian Widmer, Maximilian Wich, Hannah Danner, Georg Groh
Sentiment analysis is often a crowdsourcing task prone to subjective labels given by many annotators.
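A small illustration of why subjective crowd labels matter, assuming each item is labeled by several annotators; majority voting with an agreement score is one common (not the paper's specific) way to surface disagreement.

```python
from collections import Counter

def aggregate(labels_per_item: dict) -> dict:
    """Return (majority_label, agreement) for each item, where agreement
    is the fraction of annotators who chose the majority label."""
    result = {}
    for item, labels in labels_per_item.items():
        counts = Counter(labels)
        label, votes = counts.most_common(1)[0]
        result[item] = (label, votes / len(labels))
    return result

# Items with low agreement are exactly where annotator subjectivity shows up.
print(aggregate({"review_1": ["positive", "positive", "negative"]}))
```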
no code implementations • 15 Sep 2021 • Maximilian Wich, Adrian Gorniak, Tobias Eder, Daniel Bartmann, Burak Enes Çakici, Georg Groh
Since traditional social media platforms continue to ban actors who spread hate speech or other forms of abusive language (a process known as deplatforming), these actors migrate to alternative platforms that do not moderate user content.