1 code implementation • 6 Oct 2022 • Agostina Calabrese, Björn Ross, Mirella Lapata
To proactively offer social media users a safe online experience, there is a need for systems that can detect harmful posts and promptly alert platform moderators.
1 code implementation • ACM Web Science 2021 • Agostina Calabrese, Michele Bevilacqua, Björn Ross, Rocco Tripodi, Roberto Navigli
In this work, we introduce Adversarial Attacks against Abuse (AAA), a new evaluation strategy and associated metric that better captures a model's performance on certain classes of hard-to-classify microposts and, for example, penalises systems that are biased towards low-level lexical features.
Ranked #1 on Hate Speech Detection (Waseem et al., 2018)
no code implementations • ACL 2020 • Agostina Calabrese, Michele Bevilacqua, Roberto Navigli
Thanks to the wealth of high-quality annotated images available in popular repositories such as ImageNet, multimodal language-vision research is in full bloom.