2 code implementations • NAACL 2019 • Alexander Erdmann, David Joseph Wrisley, Benjamin Allen, Christopher Brown, Sophie Cohen-Bodénès, Micha Elsner, Yukun Feng, Brian Joseph, Béatrice Joyeux-Prunel, Marie-Catherine de Marneffe
Scholars in interdisciplinary fields like the Digital Humanities are increasingly interested in semantic annotation of specialized corpora.
1 code implementation • EMNLP 2017 • Sandesh Swamy, Alan Ritter, Marie-Catherine de Marneffe
Social media users often make explicit predictions about upcoming events.
1 code implementation • 2 Jul 2021 • Nanjiang Jiang, Marie-Catherine de Marneffe
We investigate how well BERT performs on predicting factuality in several existing English datasets, encompassing various linguistic constructions.
1 code implementation • 7 Sep 2022 • Nan-Jiang Jiang, Marie-Catherine de Marneffe
We investigate how disagreement in natural language inference (NLI) annotation arises.
1 code implementation • 20 Oct 2023 • Nan-Jiang Jiang, Chenhao Tan, Marie-Catherine de Marneffe
Human label variation, or annotation disagreement, exists in many natural language processing (NLP) tasks, including natural language inference (NLI).
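The annotation disagreement mentioned above can be made concrete: given several annotators' labels for one item, the label distribution (rather than a single majority label) captures the variation, and its entropy is a common disagreement measure. A minimal stdlib sketch, where the five annotator labels are invented for illustration:

```python
from collections import Counter
import math

def label_distribution(labels):
    """Relative frequency of each label across annotators."""
    counts = Counter(labels)
    total = len(labels)
    return {lab: c / total for lab, c in counts.items()}

def disagreement_entropy(labels):
    """Shannon entropy of the label distribution: 0 means full agreement."""
    dist = label_distribution(labels)
    return -sum(p * math.log2(p) for p in dist.values())

# Hypothetical NLI item annotated by five people
anns = ["entailment", "entailment", "neutral", "entailment", "contradiction"]
print(label_distribution(anns))            # {'entailment': 0.6, 'neutral': 0.2, 'contradiction': 0.2}
print(round(disagreement_entropy(anns), 3))  # 1.371
```

Items whose distributions are far from one-hot are exactly the ones where treating the majority label as ground truth discards information.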
no code implementations • CoNLL 2017 • Daniel Zeman, Martin Popel, Milan Straka, Jan Hajič, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinková, Jan Hajič jr., Jaroslava Hlaváčová, Václava Kettnerová, Zdeňka Urešová, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria de Paiva, Kira Droganova, Héctor Martínez Alonso, Çağrı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Héctor Fernández Alcalde, Jana Strnadová, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonça, Tatiana Lando, Rattima Nitisaroj, Josie Li
The Conference on Computational Natural Language Learning (CoNLL) features a shared task, in which participants train and test their learning systems on the same data sets.
no code implementations • WS 2018 • Jackson Luken, Nanjiang Jiang, Marie-Catherine de Marneffe
This paper describes our system submission to the 2018 Fact Extraction and VERification (FEVER) shared task.
no code implementations • WS 2017 • Taylor Mahler, Willy Cheung, Micha Elsner, David King, Marie-Catherine de Marneffe, Cory Shain, Symon Stevens-Guille, Michael White
This paper describes our "breaker" submission to the 2017 EMNLP "Build It, Break It" shared task on sentiment analysis.
no code implementations • WS 2016 • Benjamin Strauss, Bethany Toma, Alan Ritter, Marie-Catherine de Marneffe, Wei Xu
This paper presents the results of the Twitter Named Entity Recognition shared task associated with W-NUT 2016: a named entity tagging task with 10 teams participating.
no code implementations • WS 2016 • Alexander Erdmann, Christopher Brown, Brian Joseph, Mark Janse, Petra Ajaka, Micha Elsner, Marie-Catherine de Marneffe
Although spanning thousands of years and genres as diverse as liturgy, historiography, lyric and other forms of prose and poetry, the body of Latin texts is still relatively sparse compared to English.
no code implementations • LREC 2014 • Marie-Catherine de Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, Christopher D. Manning
Revisiting the now de facto standard Stanford dependency representation, we propose an improved taxonomy to capture grammatical relations across languages, including morphologically rich ones.
no code implementations • LREC 2014 • Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, Chris Manning
This resource addresses the lack of a gold standard dependency treebank for English, as well as the limited availability of gold standard syntactic annotations for English informal text genres.
no code implementations • LREC 2016 • Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajič, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, Daniel Zeman
Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments.
no code implementations • ACL 2019 • Nanjiang Jiang, Marie-Catherine de Marneffe
Here, we explore the hypothesis that linguistic deficits drive the error patterns of existing speaker commitment models by analyzing the linguistic correlates of model error on a challenging naturalistic dataset.
no code implementations • IJCNLP 2019 • Nanjiang Jiang, Marie-Catherine de Marneffe
Natural language inference (NLI) datasets (e.g., MultiNLI) were collected by soliciting hypotheses for a given premise from annotators.
no code implementations • LREC 2020 • Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajič, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, Daniel Zeman
Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework.
1 code implementation • NAACL 2021 • Xinliang Frederick Zhang, Marie-Catherine de Marneffe
Natural language inference (NLI) is the task of determining whether a piece of text is entailed by, contradicted by, or unrelated to another piece of text.
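As a toy illustration of that input/output shape (not any model discussed in these papers), a word-overlap heuristic over premise–hypothesis pairs might look like the sketch below; the function name and example sentences are invented:

```python
def naive_nli(premise, hypothesis):
    """Toy heuristic showing only the task format: map a
    (premise, hypothesis) pair to an NLI label."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    if h <= p:        # every hypothesis word also appears in the premise
        return "entailment"
    return "neutral"  # lexical overlap alone cannot detect contradiction

print(naive_nli("a dog is running in the park", "a dog is running"))
print(naive_nli("a dog is running in the park", "a cat is sleeping"))
```

Real NLI models of course use far richer semantics; the point is only that the task is a three-way classification over sentence pairs, with "contradiction" requiring more than surface overlap.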
no code implementations • COLING 2020 • Ahmad Aljanaideh, Eric Fosler-Lussier, Marie-Catherine de Marneffe
In this work, we introduce a model which leverages the pre-trained BERT model to cluster contextualized representations of a word based on (1) the context in which the word appears and (2) the labels of items the word occurs in.
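The clustering step in that description can be sketched independently of BERT: given contextualized vectors for occurrences of a word (random-looking toy 2-D vectors here, not real BERT outputs), a plain k-means pass groups occurrences by context. A minimal stdlib version, assuming Euclidean distance and hand-picked initial centroids:

```python
import math

def dist(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(vectors, centroids, iters=10):
    """Plain k-means: assign each vector to its nearest centroid,
    then recompute each centroid as its cluster's mean."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in vectors:
            nearest = min(range(len(centroids)), key=lambda j: dist(v, centroids[j]))
            clusters[nearest].append(v)
        centroids = [
            [sum(col) / len(c) for col in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters, centroids

# Toy 2-D "embeddings" of four occurrences of one word, two per context
vecs = [[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [4.8, 5.0]]
clusters, cents = kmeans(vecs, centroids=[[0.0, 0.0], [5.0, 5.0]])
print(len(clusters[0]), len(clusters[1]))  # 2 2
```

The paper's contribution lies in what is clustered (label-aware contextualized representations), not in the clustering algorithm itself, which is standard.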
no code implementations • 24 Apr 2023 • Nan-Jiang Jiang, Chenhao Tan, Marie-Catherine de Marneffe
Human label variation (Plank 2022), or annotation disagreement, exists in many natural language processing (NLP) tasks.
no code implementations • 4 Mar 2024 • Leon Weber-Genzel, Siyao Peng, Marie-Catherine de Marneffe, Barbara Plank
To fill this gap, we introduce a systematic methodology and a new dataset, VariErr (variation versus error), focusing on the NLI task in English.