Search Results for author: Marie-Catherine de Marneffe

Found 34 papers, 6 papers with code

VariErr NLI: Separating Annotation Error from Human Label Variation

no code implementations · 4 Mar 2024 Leon Weber-Genzel, Siyao Peng, Marie-Catherine de Marneffe, Barbara Plank

To fill this gap, we introduce a systematic methodology and a new dataset, VariErr (variation versus error), focusing on the NLI task in English.


Ecologically Valid Explanations for Label Variation in NLI

1 code implementation · 20 Oct 2023 Nan-Jiang Jiang, Chenhao Tan, Marie-Catherine de Marneffe

Human label variation, or annotation disagreement, exists in many natural language processing (NLP) tasks, including natural language inference (NLI).

Natural Language Inference

He Thinks He Knows Better than the Doctors: BERT for Event Factuality Fails on Pragmatics

1 code implementation · 2 Jul 2021 Nanjiang Jiang, Marie-Catherine de Marneffe

We investigate how well BERT performs on predicting factuality in several existing English datasets, encompassing various linguistic constructions.

Identifying inherent disagreement in natural language inference

1 code implementation NAACL 2021 Xinliang Frederick Zhang, Marie-Catherine de Marneffe

Natural language inference (NLI) is the task of determining whether a piece of text is entailed by, contradicted by, or unrelated to another piece of text.

Natural Language Inference
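The NLI task described above can be sketched as a simple data structure: each example pairs a premise with a hypothesis and carries one of three labels. This is a minimal illustrative sketch, not code from any of the listed papers; the class and field names are assumptions.

```python
# Minimal sketch of an NLI example: premise, hypothesis, and one of the
# three standard labels. Illustrative only; not from the papers above.
from dataclasses import dataclass

LABELS = ("entailment", "contradiction", "neutral")

@dataclass
class NLIExample:
    premise: str
    hypothesis: str
    label: str

    def __post_init__(self):
        # Reject anything outside the three-way label scheme.
        if self.label not in LABELS:
            raise ValueError(f"unknown NLI label: {self.label}")

example = NLIExample(
    premise="A dog is running in the park.",
    hypothesis="An animal is outside.",
    label="entailment",
)
print(example.label)  # entailment
```

Datasets such as MultiNLI and the CommitmentBank-derived data discussed below follow this premise/hypothesis/label shape, though several of the papers in this listing study cases where annotators legitimately disagree on the label.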

Contextualized Embeddings for Enriching Linguistic Analyses on Politeness

no code implementations COLING 2020 Ahmad Aljanaideh, Eric Fosler-Lussier, Marie-Catherine de Marneffe

In this work, we introduce a model which leverages the pre-trained BERT model to cluster contextualized representations of a word based on (1) the context in which the word appears and (2) the labels of items the word occurs in.

Clustering · Word Embeddings

Universal Dependencies v2: An Evergrowing Multilingual Treebank Collection

no code implementations LREC 2020 Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajič, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, Daniel Zeman

Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework.
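UD treebanks are distributed in the CoNLL-U format, where each token line has ten tab-separated fields. The sketch below parses one such line; the sentence content is made up for illustration, but the ten-column layout is the standard CoNLL-U scheme.

```python
# Hedged sketch: reading one token line of the CoNLL-U format used by
# Universal Dependencies. Token lines have exactly 10 tab-separated
# fields: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC.
FIELDS = ["id", "form", "lemma", "upos", "xpos",
          "feats", "head", "deprel", "deps", "misc"]

def parse_token_line(line: str) -> dict:
    cols = line.rstrip("\n").split("\t")
    if len(cols) != 10:
        raise ValueError("CoNLL-U token lines have exactly 10 columns")
    return dict(zip(FIELDS, cols))

# Example token line (content invented for illustration).
tok = parse_token_line("1\tDogs\tdog\tNOUN\tNNS\tNumber=Plur\t2\tnsubj\t_\t_")
print(tok["deprel"])  # nsubj
```

The DEPREL field holds the universal dependency relation (here `nsubj`), drawn from the cross-linguistic taxonomy that the Universal Stanford dependencies paper below proposes.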

Evaluating BERT for natural language inference: A case study on the CommitmentBank

no code implementations IJCNLP 2019 Nanjiang Jiang, Marie-Catherine de Marneffe

Natural language inference (NLI) datasets (e.g., MultiNLI) were collected by soliciting hypotheses for a given premise from annotators.

Natural Language Inference · Negation

Do You Know That Florence Is Packed with Visitors? Evaluating State-of-the-art Models of Speaker Commitment

no code implementations ACL 2019 Nanjiang Jiang, Marie-Catherine de Marneffe

Here, we explore the hypothesis that linguistic deficits drive the error patterns of existing speaker commitment models by analyzing the linguistic correlates of model error on a challenging naturalistic dataset.

Negation · Question Answering

Challenges and Solutions for Latin Named Entity Recognition

no code implementations WS 2016 Alexander Erdmann, Christopher Brown, Brian Joseph, Mark Janse, Petra Ajaka, Micha Elsner, Marie-Catherine de Marneffe

Although spanning thousands of years and genres as diverse as liturgy, historiography, lyric and other forms of prose and poetry, the body of Latin texts is still relatively sparse compared to English.

Active Learning · Domain Adaptation +5

Results of the WNUT16 Named Entity Recognition Shared Task

no code implementations WS 2016 Benjamin Strauss, Bethany Toma, Alan Ritter, Marie-Catherine de Marneffe, Wei Xu

This paper presents the results of the Twitter Named Entity Recognition shared task associated with W-NUT 2016: a named entity tagging task with 10 teams participating.

Named Entity Recognition · Named Entity Recognition (NER)

A Gold Standard Dependency Corpus for English

no code implementations LREC 2014 Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, Chris Manning

This resource addresses the lack of a gold standard dependency treebank for English, as well as the limited availability of gold standard syntactic annotations for English informal text genres.

Sentiment Analysis

Universal Stanford dependencies: A cross-linguistic typology

no code implementations LREC 2014 Marie-Catherine de Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, Christopher D. Manning

Revisiting the now de facto standard Stanford dependency representation, we propose an improved taxonomy to capture grammatical relations across languages, including morphologically rich ones.
