no code implementations • EMNLP (ALW) 2020 • Vinodkumar Prabhakaran, Zeerak Waseem, Seyi Akiwowo, Bertie Vidgen
In 2020 The Workshop on Online Abuse and Harms (WOAH) held a satellite panel at RightsCon 2020, an international human rights conference.
no code implementations • EMNLP (NLP+CSS) 2020 • Bertie Vidgen, Scott Hale, Sam Staton, Tom Melham, Helen Margetts, Ohad Kammar, Marcin Szymczak
We investigate the use of machine learning classifiers for detecting online abuse in empirical research.
no code implementations • ACL (WOAH) 2021 • Lambert Mathias, Shaoliang Nie, Aida Mostafazadeh Davani, Douwe Kiela, Vinodkumar Prabhakaran, Bertie Vidgen, Zeerak Waseem
We present the results and main findings of the shared task at WOAH 5 on hateful memes detection.
1 code implementation • 20 Jun 2022 • Paul Röttger, Haitham Seelawi, Debora Nozza, Zeerak Talat, Bertie Vidgen
To help address this issue, we introduce Multilingual HateCheck (MHC), a suite of functional tests for multilingual hate speech detection models.
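To illustrate the idea behind functional tests for hate speech detection models, here is a minimal sketch of a template-based functional test harness. All names here are hypothetical (`classify` stands in for any model that returns "hateful"/"non-hateful", and `[SLUR]` is a placeholder token); this is not MHC's actual API, only an illustration of how targeted test cases can expose model failure modes that aggregate accuracy hides.

```python
def classify(text: str) -> str:
    # Hypothetical stand-in classifier: naively flags any text that
    # contains the slur placeholder, regardless of context.
    return "hateful" if "[SLUR]" in text else "non-hateful"

# Each functional test pairs a templated input with a gold label and
# probes one specific model capability (e.g. direct hate vs. negation).
FUNCTIONAL_TESTS = [
    ("F1: direct hate", "You are just a [SLUR].", "hateful"),
    ("F2: negated hate", "I would never call you a [SLUR].", "non-hateful"),
]

def run_tests(tests):
    # Returns per-test pass/fail, so failures can be traced to a
    # specific capability rather than folded into one accuracy number.
    results = {}
    for name, text, gold in tests:
        results[name] = (classify(text) == gold)
    return results
```

Running this, the keyword-style classifier passes the direct-hate test but fails the negation test, which is exactly the kind of fine-grained diagnostic that functional test suites are designed to surface.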
no code implementations • 29 Apr 2022 • Leon Derczynski, Hannah Rose Kirk, Abeba Birhane, Bertie Vidgen
Textual data can pose a risk of serious harm.
1 code implementation • 14 Dec 2021 • Paul Röttger, Bertie Vidgen, Dirk Hovy, Janet B. Pierrehumbert
To address this issue, we propose two contrasting paradigms for data annotation.
no code implementations • 15 Sep 2021 • Laila Sprejer, Helen Margetts, Kleber Oliveira, David O'Sullivan, Bertie Vidgen
We show that it is crucial to account for the influencer-level structure, and find evidence of the importance of both influencer- and content-level factors: the number of followers each influencer has; the type of content (original posts, quotes and replies); the length and toxicity of content; and whether influencers request retweets.
no code implementations • Findings (ACL) 2021 • Austin Botelho, Bertie Vidgen, Scott A. Hale
We show that both textual and visual enrichment improve model performance, with the multimodal model (F1 = 0.771) outperforming the other models' F1 scores (0.544, 0.737, and 0.754).
1 code implementation • NAACL 2021 • Bertie Vidgen, Dong Nguyen, Helen Margetts, Patricia Rossini, Rebekah Tromble
Online abuse can inflict harm on users and communities, making online spaces unsafe and toxic.
no code implementations • NAACL 2021 • Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, Adina Williams
We introduce Dynabench, an open-source platform for dynamic dataset creation and model benchmarking.
1 code implementation • EACL 2021 • Ella Guest, Bertie Vidgen, Alexandros Mittos, Nishanth Sastry, Gareth Tyson, Helen Margetts
Online misogyny is a pernicious social problem that risks making online platforms toxic and unwelcoming to women.
no code implementations • 22 Mar 2021 • Zo Ahmed, Bertie Vidgen, Scott A. Hale
Yet, most research in online hate detection to date has focused on hateful content.
2 code implementations • ACL 2021 • Bertie Vidgen, Tristan Thrush, Zeerak Waseem, Douwe Kiela
We provide a new dataset of ~40,000 entries, generated and labelled by trained annotators over four rounds of dynamic data creation.
1 code implementation • EMNLP (ALW) 2020 • Bertie Vidgen, Austin Botelho, David Broniatowski, Ella Guest, Matthew Hall, Helen Margetts, Rebekah Tromble, Zeerak Waseem, Scott Hale
The outbreak of COVID-19 has transformed societies across the world as governments tackle the health, economic and social costs of the pandemic.
no code implementations • 3 Apr 2020 • Bertie Vidgen, Leon Derczynski
Data-driven analysis and detection of abusive online content covers many different tasks, phenomena, contexts, and methodologies.
no code implementations • 13 Oct 2019 • Bertie Vidgen, Taha Yasseri, Helen Margetts
Far-right actors are often purveyors of Islamophobic hate speech online, using social media to spread divisive and prejudiced messages which can stir up intergroup tensions and conflict.
Social and Information Networks • Computers and Society • Physics and Society • Applications
1 code implementation • WS 2019 • Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, Helen Margetts
Online abusive content detection is an inherently difficult task.
no code implementations • 12 Dec 2018 • Bertie Vidgen, Taha Yasseri
Islamophobic hate speech on social media inflicts considerable harm on both targeted individuals and wider society, and also risks reputational damage for the host platforms.