no code implementations • 11 Oct 2022 • Juntao Yu, Silviu Paun, Maris Camilleri, Paloma Carretero Garcia, Jon Chamberlain, Udo Kruschwitz, Massimo Poesio
Although several datasets annotated for anaphoric reference/coreference exist, even the largest such datasets have limitations in terms of size, range of domains, coverage of anaphoric phenomena, and size of documents included.
no code implementations • SEMEVAL 2021 • Alexandra Uma, Tommaso Fornaciari, Anca Dumitrache, Tristan Miller, Jon Chamberlain, Barbara Plank, Edwin Simpson, Massimo Poesio
Disagreement between coders is ubiquitous in virtually all datasets annotated with human judgements in both natural language processing and computer vision.
no code implementations • 31 Dec 2020 • Alba García Seco De Herrera, Rukiye Savran Kiziltepe, Jon Chamberlain, Mihai Gabriel Constantin, Claire-Hélène Demarty, Faiyaz Doctor, Bogdan Ionescu, Alan F. Smeaton
This paper describes the MediaEval 2020 "Predicting Media Memorability" task.
no code implementations • LREC 2020 • Jon Chamberlain, Udo Kruschwitz, Massimo Poesio
Crowdsourcing approaches pose a difficult design challenge for developers.
no code implementations • LREC 2020 • Osman Doruk Kicikoglu, Richard Bartle, Jon Chamberlain, Silviu Paun, Massimo Poesio
As the uses of Games-With-A-Purpose (GWAPs) broaden, the systems that incorporate them have grown in complexity.
no code implementations • LREC 2020 • Liang Xu, Jon Chamberlain
Errors are common in machine-generated documents and publication materials; however, correction algorithms often perform poorly on complex errors, and employing humans for the task is costly.
no code implementations • 25 Sep 2019 • Silviu Paun, Juntao Yu, Jon Chamberlain, Udo Kruschwitz, Massimo Poesio
The model is also flexible enough to be used in standard classification annotation tasks, where it performs on par with the state of the art.
1 code implementation • ACL 2019 • Chris Madge, Juntao Yu, Jon Chamberlain, Udo Kruschwitz, Silviu Paun, Massimo Poesio
One of the key steps in language resource creation is the identification of the text segments to be annotated, or markables, which, depending on the task, may vary from nominal chunks for named entity resolution to (potentially nested) noun phrases in coreference resolution (or mentions) to larger text segments in text segmentation.
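To make the markable-identification step concrete, here is a minimal sketch that extracts candidate markables as noun chunks with spaCy. It illustrates the generic step only, not the crowdsourcing approach the paper proposes; the function name is illustrative.

```python
# Minimal sketch: candidate markables as noun chunks via spaCy.
# Requires the small English model: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_markables(text):
    """Return (start_char, end_char, text) spans for candidate markables."""
    doc = nlp(text)
    # spaCy's noun_chunks yields flat (non-nested) noun phrases; full
    # coreference annotation would also need nested NPs and pronouns.
    return [(chunk.start_char, chunk.end_char, chunk.text)
            for chunk in doc.noun_chunks]

print(extract_markables("The lawyer who met the judge greeted her."))
```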
no code implementations • NAACL 2019 • Massimo Poesio, Jon Chamberlain, Silviu Paun, Juntao Yu, Alexandra Uma, Udo Kruschwitz
The corpus, containing annotations for about 108,000 markables, is one of the largest corpora for coreference for English, and one of the largest crowdsourced NLP corpora, but its main feature is the large number of judgments per markable: 20 on average, and over 2.2M in total.
no code implementations • EMNLP 2018 • Silviu Paun, Jon Chamberlain, Udo Kruschwitz, Juntao Yu, Massimo Poesio
The availability of large-scale annotated corpora for coreference is essential to the development of the field.
no code implementations • TACL 2018 • Silviu Paun, Bob Carpenter, Jon Chamberlain, Dirk Hovy, Udo Kruschwitz, Massimo Poesio
We evaluate these models along four aspects: comparison to gold labels, predictive accuracy for new annotations, annotator characterization, and item difficulty, using four datasets with varying degrees of noise in the form of random (spammy) annotators.
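For intuition, below is a minimal sketch of Dawid-Skene-style EM aggregation, one classic pooled model in this family of Bayesian annotation models. The function and toy triples are illustrative assumptions, not the paper's code or data.

```python
# Minimal Dawid-Skene EM sketch for aggregating noisy crowd labels.
import numpy as np

def dawid_skene(labels, n_classes, n_iter=50):
    """labels: int array of (item, annotator, label) rows."""
    n_items = labels[:, 0].max() + 1
    n_annot = labels[:, 1].max() + 1
    # Initialise per-item class estimates T from vote proportions.
    T = np.zeros((n_items, n_classes))
    for i, a, l in labels:
        T[i, l] += 1
    T /= T.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # M-step: class prior pi and one confusion matrix per annotator.
        pi = T.sum(axis=0) / n_items
        conf = np.full((n_annot, n_classes, n_classes), 1e-6)  # smoothing
        for i, a, l in labels:
            conf[a, :, l] += T[i]
        conf /= conf.sum(axis=2, keepdims=True)
        # E-step: posterior over true classes for each item.
        logT = np.tile(np.log(pi), (n_items, 1))
        for i, a, l in labels:
            logT[i] += np.log(conf[a, :, l])
        T = np.exp(logT - logT.max(axis=1, keepdims=True))
        T /= T.sum(axis=1, keepdims=True)
    return T

# Toy data: 3 annotators label 2 items with binary classes.
triples = np.array([[0, 0, 1], [0, 1, 1], [0, 2, 0],
                    [1, 0, 0], [1, 1, 0], [1, 2, 0]])
print(dawid_skene(triples, n_classes=2).argmax(axis=1))
```

Per-annotator confusion matrices are what let such models down-weight random (spammy) annotators automatically, rather than trusting a simple majority vote.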
no code implementations • LREC 2016 • Jon Chamberlain, Massimo Poesio, Udo Kruschwitz
Corpora are typically annotated by several experts to create a gold standard; however, there are now compelling reasons to use a non-expert crowd to annotate text, driven by cost, speed and scalability.
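As a concrete example of the expert-agreement baseline that gold standards rely on, here is a minimal sketch computing Cohen's kappa between two coders; the toy labels and function name are hypothetical, not from the paper.

```python
# Minimal sketch: chance-corrected agreement (Cohen's kappa) for two coders.
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa between two equal-length label sequences."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Expected agreement if both coders labelled at random
    # according to their own label distributions.
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

expert = ["PER", "ORG", "PER", "LOC", "PER"]
crowd  = ["PER", "ORG", "LOC", "LOC", "PER"]
print(round(cohens_kappa(expert, crowd), 3))  # 0.688
```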