Search Results for author: Bertie Vidgen

Found 27 papers, 12 papers with code

Detecting East Asian Prejudice on Social Media

4 code implementations EMNLP (ALW) 2020 Bertie Vidgen, Austin Botelho, David Broniatowski, Ella Guest, Matthew Hall, Helen Margetts, Rebekah Tromble, Zeerak Waseem, Scott Hale

The outbreak of COVID-19 has transformed societies across the world as governments tackle the health, economic and social costs of the pandemic.

Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection

2 code implementations ACL 2021 Bertie Vidgen, Tristan Thrush, Zeerak Waseem, Douwe Kiela

We provide a new dataset of ~40,000 entries, generated and labelled by trained annotators over four rounds of dynamic data creation.

Hate Speech Detection

FinanceBench: A New Benchmark for Financial Question Answering

1 code implementation 20 Nov 2023 Pranab Islam, Anand Kannappan, Douwe Kiela, Rebecca Qian, Nino Scherrer, Bertie Vidgen

We test 16 state-of-the-art model configurations (including GPT-4-Turbo, Llama2 and Claude2, with vector stores and long context prompts) on a sample of 150 cases from FinanceBench, and manually review their answers (n=2,400).

Question Answering · Retrieval +1

XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models

1 code implementation 2 Aug 2023 Paul Röttger, Hannah Rose Kirk, Bertie Vidgen, Giuseppe Attanasio, Federico Bianchi, Dirk Hovy

In this paper, we introduce a new test suite called XSTest to identify such eXaggerated Safety behaviours in a systematic way.

Language Modelling

Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models

1 code implementation NAACL (WOAH) 2022 Paul Röttger, Haitham Seelawi, Debora Nozza, Zeerak Talat, Bertie Vidgen

To help address this issue, we introduce Multilingual HateCheck (MHC), a suite of functional tests for multilingual hate speech detection models.

Hate Speech Detection

Introducing CAD: the Contextual Abuse Dataset

1 code implementation NAACL 2021 Bertie Vidgen, Dong Nguyen, Helen Margetts, Patricia Rossini, Rebekah Tromble

Online abuse can inflict harm on users and communities, making online spaces unsafe and toxic.

Detecting weak and strong Islamophobic hate speech on social media

no code implementations 12 Dec 2018 Bertie Vidgen, Taha Yasseri

Islamophobic hate speech on social media inflicts considerable harm on both targeted individuals and wider society, and also risks reputational damage for the host platforms.

Word Embeddings

Islamophobes are not all the same! A study of far right actors on Twitter

no code implementations 13 Oct 2019 Bertie Vidgen, Taha Yasseri, Helen Margetts

Far-right actors are often purveyors of Islamophobic hate speech online, using social media to spread divisive and prejudiced messages which can stir up intergroup tensions and conflict.

Social and Information Networks · Computers and Society · Physics and Society · Applications

Directions in Abusive Language Training Data: Garbage In, Garbage Out

no code implementations 3 Apr 2020 Bertie Vidgen, Leon Derczynski

Data-driven analysis and detection of abusive online content covers many different tasks, phenomena, contexts, and methodologies.

Abusive Language

Deciphering Implicit Hate: Evaluating Automated Detection Algorithms for Multimodal Hate

no code implementations Findings (ACL) 2021 Austin Botelho, Bertie Vidgen, Scott A. Hale

We show that both text- and visual-enrichment improve model performance, with the multimodal model (0.771) outperforming other models' F1 scores (0.544, 0.737, and 0.754).

An influencer-based approach to understanding radical right viral tweets

no code implementations 15 Sep 2021 Laila Sprejer, Helen Margetts, Kleber Oliveira, David O'Sullivan, Bertie Vidgen

We show that it is crucial to account for the influencer-level structure, and find evidence of the importance of both influencer- and content-level factors, including the number of followers each influencer has, the type of content (original posts, quotes and replies), the length and toxicity of content, and whether influencers request retweets.

Online Abuse and Human Rights: WOAH Satellite Session at RightsCon 2020

no code implementations EMNLP (ALW) 2020 Vinodkumar Prabhakaran, Zeerak Waseem, Seyi Akiwowo, Bertie Vidgen

In 2020, the Workshop on Online Abuse and Harms (WOAH) held a satellite panel at RightsCon 2020, an international human rights conference.

Personalisation within bounds: A risk taxonomy and policy framework for the alignment of large language models with personalised feedback

no code implementations 9 Mar 2023 Hannah Rose Kirk, Bertie Vidgen, Paul Röttger, Scott A. Hale

Large language models (LLMs) are used to generate content for a wide range of tasks, and are set to reach a growing audience in coming years due to integration in product interfaces like ChatGPT or search engines like Bing.

The Empty Signifier Problem: Towards Clearer Paradigms for Operationalising "Alignment" in Large Language Models

no code implementations 3 Oct 2023 Hannah Rose Kirk, Bertie Vidgen, Paul Röttger, Scott A. Hale

In this paper, we address the concept of "alignment" in large language models (LLMs) through the lens of post-structuralist socio-political theory, specifically examining its parallels to empty signifiers.

SimpleSafetyTests: a Test Suite for Identifying Critical Safety Risks in Large Language Models

no code implementations 14 Nov 2023 Bertie Vidgen, Nino Scherrer, Hannah Rose Kirk, Rebecca Qian, Anand Kannappan, Scott A. Hale, Paul Röttger

While some of the models do not give a single unsafe response, most give unsafe responses to more than 20% of the prompts, with over 50% unsafe responses in the extreme.
