Search Results for author: Jeffrey Sorensen

Found 14 papers, 6 papers with code

SemEval-2022 Task 5: Multimedia Automatic Misogyny Identification

no code implementations · SemEval (NAACL) 2022 · Elisabetta Fersini, Francesca Gasparini, Giulia Rizzi, Aurora Saibene, Berta Chulvi, Paolo Rosso, Alyssa Lees, Jeffrey Sorensen

The paper describes the SemEval-2022 Task 5: Multimedia Automatic Misogyny Identification (MAMI), which explores the detection of misogynous memes on the web by taking advantage of available texts and images.

From the Detection of Toxic Spans in Online Discussions to the Analysis of Toxic-to-Civil Transfer

1 code implementation · ACL 2022 · John Pavlopoulos, Leo Laugier, Alexandros Xenos, Jeffrey Sorensen, Ion Androutsopoulos

We study the task of toxic spans detection, which concerns the detection of the spans that make a text toxic, when detecting such spans is possible.

Toxic Spans Detection

Lost in Distillation: A Case Study in Toxicity Modeling

no code implementations · NAACL (WOAH) 2022 · Alyssa Chvasta, Alyssa Lees, Jeffrey Sorensen, Lucy Vasserman, Nitesh Goyal

In an era of increasingly large pre-trained language models, knowledge distillation is a powerful tool for transferring information from a large model to a smaller one.

Knowledge Distillation

SemEval-2021 Task 5: Toxic Spans Detection

no code implementations · SemEval 2021 · John Pavlopoulos, Jeffrey Sorensen, Léo Laugier, Ion Androutsopoulos

For the supervised sequence labeling approach and evaluation purposes, posts previously labeled as toxic were crowd-annotated for toxic spans.

Toxic Spans Detection

Civil Rephrases Of Toxic Texts With Self-Supervised Transformers

1 code implementation · EACL 2021 · Leo Laugier, John Pavlopoulos, Jeffrey Sorensen, Lucas Dixon

Platforms that support online commentary, from social networks to news sites, are increasingly leveraging machine learning to assist their moderation efforts.

Denoising · Self-Supervised Learning · +2

Classifying Constructive Comments

2 code implementations · 11 Apr 2020 · Varada Kolhatkar, Nithum Thain, Jeffrey Sorensen, Lucas Dixon, Maite Taboada

The quality of the annotation scheme and the resulting dataset is evaluated using inter-annotator agreement, expert assessment of a sample, and the constructiveness sub-characteristics, which we show provide a proxy for the general constructiveness concept.

Domain Adaptation

Nuanced Metrics for Measuring Unintended Bias with Real Data for Text Classification

4 code implementations · 11 Mar 2019 · Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, Lucy Vasserman

Unintended bias in Machine Learning can manifest as systemic differences in performance for different demographic groups, potentially compounding existing challenges to fairness in society at large.

BIG-bench Machine Learning · Fairness · +2
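This paper measures bias by restricting standard classifier metrics to examples mentioning a given demographic subgroup. As a rough illustration only (not the authors' code, and with made-up data), the sketch below computes an AUC over just the subgroup's examples, using a hand-rolled pairwise AUC; the function names and toy inputs are assumptions for illustration:

```python
def auc(labels, scores):
    # Probability that a randomly chosen positive example is scored
    # above a randomly chosen negative one (ties count as half).
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def subgroup_auc(labels, scores, in_subgroup):
    # AUC computed only on the examples that mention the identity subgroup.
    sub = [(l, s) for l, s, g in zip(labels, scores, in_subgroup) if g]
    sub_labels, sub_scores = zip(*sub)
    return auc(sub_labels, sub_scores)

# Toy data: toxicity labels, model scores, subgroup membership flags.
labels      = [1, 0, 1, 0, 1, 0]
scores      = [0.9, 0.2, 0.6, 0.85, 0.8, 0.1]
in_subgroup = [True, True, False, True, True, False]

print(subgroup_auc(labels, scores, in_subgroup))  # 0.75
```

A large gap between the subgroup AUC and the overall AUC is one signal of the "systemic differences in performance for different demographic groups" the abstract describes.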
