Search Results for author: Alexandra Uma

Found 6 papers, 1 paper with code

A Cluster Ranking Model for Full Anaphora Resolution

1 code implementation • LREC 2020 • Juntao Yu, Alexandra Uma, Massimo Poesio

In this paper, we introduce an architecture to simultaneously identify non-referring expressions (including expletives, predicative NPs, and other types) and build coreference chains, including singletons.

Coreference Resolution
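The paper's actual architecture is not reproduced here, but the general cluster-ranking idea named in its title can be sketched briefly: mentions are processed left to right, and each one is scored against every existing cluster plus a "start a new cluster" option, so singletons fall out naturally. Everything below (the `score` function, the threshold, the toy data) is a hypothetical illustration, not the authors' model.

```python
# A minimal sketch of generic cluster ranking for coreference, assuming a
# learned mention-cluster compatibility scorer; here score() is a toy
# string-match stand-in.

def score(mention, cluster):
    """Hypothetical compatibility score between a mention and a cluster."""
    return max(1.0 if mention == m else 0.0 for m in cluster)

def cluster_rank(mentions, new_cluster_threshold=0.5):
    clusters = []  # each cluster is a list of mentions
    for mention in mentions:
        if clusters:
            # rank all existing clusters and pick the best-scoring one
            best = max(range(len(clusters)), key=lambda i: score(mention, clusters[i]))
            if score(mention, clusters[best]) >= new_cluster_threshold:
                clusters[best].append(mention)
                continue
        # no cluster is compatible enough: start a new (possibly singleton) cluster
        clusters.append([mention])
    return clusters

print(cluster_rank(["Alice", "she", "Alice", "Bob"]))
# [['Alice', 'Alice'], ['she'], ['Bob']]
```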

SemEval-2021 Task 12: Learning with Disagreements

no code implementations • SEMEVAL 2021 • Alexandra Uma, Tommaso Fornaciari, Anca Dumitrache, Tristan Miller, Jon Chamberlain, Barbara Plank, Edwin Simpson, Massimo Poesio

Disagreement between coders is ubiquitous in virtually all datasets annotated with human judgements in both natural language processing and computer vision.

We Need to Consider Disagreement in Evaluation

no code implementations • ACL (BPPF) 2021 • Valerio Basile, Michael Fell, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, Massimo Poesio, Alexandra Uma

Instead, we suggest that we need to better capture the sources of disagreement to improve today’s evaluation practice.

Hard and Soft Evaluation of NLP models with BOOtSTrap SAmpling - BooStSa

no code implementations • ACL 2022 • Tommaso Fornaciari, Alexandra Uma, Massimo Poesio, Dirk Hovy

Natural Language Processing (NLP)'s applied nature makes it necessary to select the most effective and robust models.

Experimental Design
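BooStSa itself is not shown here; the sketch below only illustrates the underlying technique it builds on, paired bootstrap significance testing: resample the test set with replacement and estimate how often one model fails to beat the other. Function and variable names are hypothetical, and the correctness vectors are made up.

```python
# A minimal sketch of paired bootstrap significance testing, assuming two
# models evaluated on the same test set (this is an illustration of the
# general technique, not the BooStSa API).
import random

def paired_bootstrap(correct_a, correct_b, n_resamples=10_000, seed=0):
    rng = random.Random(seed)
    n, failures = len(correct_a), 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        acc_a = sum(correct_a[i] for i in idx) / n
        acc_b = sum(correct_b[i] for i in idx) / n
        if acc_b <= acc_a:           # B does not beat A on this resample
            failures += 1
    return failures / n_resamples    # approximate p-value for "B > A"

# per-example 0/1 correctness for two models on the same test items
a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
b = [1, 1, 1, 1, 0, 1, 1, 1, 1, 0]
print(paired_bootstrap(a, b))
```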

SemEval-2023 Task 11: Learning With Disagreements (LeWiDi)

no code implementations • 28 Apr 2023 • Elisa Leonardelli, Alexandra Uma, Gavin Abercrombie, Dina Almanea, Valerio Basile, Tommaso Fornaciari, Barbara Plank, Verena Rieser, Massimo Poesio

We report on the second LeWiDi shared task, which differs from the first edition in three crucial respects: (i) it focuses entirely on NLP, instead of covering both NLP and computer vision tasks; (ii) it focuses on subjective tasks, instead of covering different types of disagreements, as training with aggregated labels for subjective NLP tasks is a particularly obvious misrepresentation of the data; and (iii) for evaluation, we concentrate on soft approaches.

Sentiment Analysis
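As a rough illustration of what "soft" evaluation means, the sketch below scores a model's predicted label distribution against the normalized distribution of human annotations using cross-entropy. This is a generic example of the idea, not necessarily the task's official metric, and the distributions are invented.

```python
# A minimal sketch of soft evaluation: cross-entropy between predicted label
# distributions and the empirical distribution of annotator labels.
import math

def soft_cross_entropy(human_dists, model_dists, eps=1e-12):
    """Average of -sum_c p_human(c) * log p_model(c) over items (lower is better)."""
    total = 0.0
    for p, q in zip(human_dists, model_dists):
        total += -sum(pi * math.log(qi + eps) for pi, qi in zip(p, q))
    return total / len(human_dists)

# two items, binary labels: 4/6 vs. 2/6 annotators chose class 0 (made-up data)
human = [[4/6, 2/6], [2/6, 4/6]]
model = [[0.7, 0.3], [0.5, 0.5]]
print(soft_cross_entropy(human, model))
```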
