AAA: Fair Evaluation for Abuse Detection Systems Wanted

User-generated web content is rife with abusive language that can harm others and discourage participation. A primary research aim is therefore to develop abuse detection systems that can alert and support the human moderators of online communities. Such systems are notoriously hard to develop and evaluate: even when they appear to achieve satisfactory performance on current evaluation metrics, they may fail in practice on new data. This is partly because the datasets commonly used in this field suffer from selection bias, and consequently existing supervised models over-rely on cue words such as group identifiers (e.g., "gay" and "black") which are not inherently abusive. Although there have been attempts to mitigate this bias, current evaluation metrics do not adequately quantify their progress. In this work, we introduce Adversarial Attacks against Abuse (AAA), a new evaluation strategy and associated metric that better captures a model's performance on certain classes of hard-to-classify microposts and, for example, penalises systems that are biased towards low-level lexical features. It does so by adversarially modifying the model developer's training and test data to generate plausible test samples dynamically. We make AAA available as an easy-to-use tool and demonstrate its effectiveness for error analysis by comparing the AAA performance of several state-of-the-art models on multiple datasets. This work will inform the development of detection systems and contribute to the fight against abusive language online.
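The abstract describes AAA as penalising models that latch onto non-abusive group identifiers as cue words. The sketch below is a hypothetical illustration of that idea only, not the authors' released tool or metric: it builds benign, identity-bearing probe sentences and measures how often a classifier wrongly flags them as abusive. All names here (IDENTITY_TERMS, BENIGN_TEMPLATES, non_abusive_identity_probe) are illustrative assumptions.

```python
# Hypothetical sketch of one ingredient of an AAA-style evaluation:
# probing for false positives on benign text that mentions group
# identifiers. Not the authors' implementation.
from typing import Callable, List

# Group identifiers that are not inherently abusive but often act as
# spurious cue words for biased classifiers (examples from the abstract
# plus assumed additions).
IDENTITY_TERMS = ["gay", "black", "muslim", "women"]

# Benign templates: a robust model should label every filled-in probe
# as non-abusive.
BENIGN_TEMPLATES = [
    "I am proud to be {}.",
    "My {} friends are wonderful people.",
    "The {} community organised a charity event.",
]

def non_abusive_identity_probe(
    predict: Callable[[List[str]], List[int]],  # 1 = abusive, 0 = not
) -> float:
    """Return the fraction of benign identity-bearing probes that the
    model wrongly flags as abusive (lower is better)."""
    probes = [t.format(term) for t in BENIGN_TEMPLATES for term in IDENTITY_TERMS]
    preds = predict(probes)
    return sum(preds) / len(preds)

if __name__ == "__main__":
    # A deliberately biased keyword classifier: flags any identity mention.
    biased = lambda texts: [
        int(any(w in t.lower() for w in IDENTITY_TERMS)) for t in texts
    ]
    print(f"False-positive rate on benign probes: {non_abusive_identity_probe(biased):.2f}")
    # Prints 1.00: the keyword model flags every benign identity mention,
    # exactly the failure mode the AAA metric is designed to penalise.
```

A full AAA evaluation, as the abstract notes, goes further by modifying the developer's own training and test data to generate plausible adversarial samples dynamically; this probe illustrates only the lexical-bias aspect.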


Datasets


Task: Hate Speech Detection
Dataset: Waseem et al., 2018

Model                   AAA     Rank   F1 (micro)   Rank
Mozafari et al., 2019   50.94   #1     84.42        #1
SVM                     46.51   #2     82.18        #2
Kennedy et al., 2020    45.50   #3     82.18        #2
