Search Results for author: Enrico Santus

Found 38 papers, 14 papers with code

CMCL 2022 Shared Task on Multilingual and Crosslingual Prediction of Human Reading Behavior

no code implementations CMCL (ACL) 2022 Nora Hollenstein, Emmanuele Chersoni, Cassandra Jacobs, Yohei Oseki, Laurent Prévot, Enrico Santus

We present the second shared task on eye-tracking data prediction of the Cognitive Modeling and Computational Linguistics Workshop (CMCL).

Decoding Word Embeddings with Brain-Based Semantic Features

no code implementations CL (ACL) 2021 Emmanuele Chersoni, Enrico Santus, Chu-Ren Huang, Alessandro Lenci

For each probing task, we identify the most relevant semantic features and we show that there is a correlation between the embedding performance and how they encode those features.

Word Embeddings

CMCL 2021 Shared Task on Eye-Tracking Prediction

no code implementations NAACL (CMCL) 2021 Nora Hollenstein, Emmanuele Chersoni, Cassandra L. Jacobs, Yohei Oseki, Laurent Prévot, Enrico Santus

The goal of the task is to predict 5 different token-level eye-tracking metrics of the Zurich Cognitive Language Processing Corpus (ZuCo).

The CogALex Shared Task on Monolingual and Multilingual Identification of Semantic Relations

no code implementations COLING (CogALex) 2020 Rong Xiang, Emmanuele Chersoni, Luca Iacoponi, Enrico Santus

One dataset contains pairs for each of the training languages (systems were evaluated in a monolingual fashion), while the other proposes a surprise language to test the crosslingual transfer capabilities of the systems.

NADE: A Benchmark for Robust Adverse Drug Events Extraction in Face of Negations

1 code implementation WNUT (ACL) 2021 Simone Scaboro, Beatrice Portelli, Emmanuele Chersoni, Enrico Santus, Giuseppe Serra

Adverse Drug Event (ADE) extraction models can rapidly examine large collections of social media texts, detecting mentions of drug-related adverse reactions and triggering medical investigations.

Negation Detection

Deciphering Undersegmented Ancient Scripts Using Phonetic Prior

1 code implementation 21 Oct 2020 Jiaming Luo, Frederik Hartmann, Enrico Santus, Yuan Cao, Regina Barzilay

We evaluate the model on both deciphered languages (Gothic, Ugaritic) and an undeciphered one (Iberian).

Decipherment

Distilling the Evidence to Augment Fact Verification Models

no code implementations WS 2020 Beatrice Portelli, Jason Zhao, Tal Schuster, Giuseppe Serra, Enrico Santus

We propose, instead, a model-agnostic framework that consists of two modules: (1) a span extractor, which identifies the crucial information connecting claim and evidence; and (2) a classifier that combines claim, evidence, and the extracted spans to predict the veracity of the claim.

Fact Verification

Are Word Embeddings Really a Bad Fit for the Estimation of Thematic Fit?

no code implementations LREC 2020 Emmanuele Chersoni, Ludovica Pannitto, Enrico Santus, Alessandro Lenci, Chu-Ren Huang

While neural embeddings represent a popular choice for word representation in a wide variety of NLP tasks, their usage for thematic fit modeling has been limited, as they have been reported to lag behind syntax-based count models.

Word Embeddings

A Structured Distributional Model of Sentence Meaning and Processing

no code implementations 17 Jun 2019 Emmanuele Chersoni, Enrico Santus, Ludovica Pannitto, Alessandro Lenci, Philippe Blache, Chu-Ren Huang

In this paper, we propose a Structured Distributional Model (SDM) that combines word embeddings with formal semantics and is based on the assumption that sentences represent events and situations.

Word Embeddings

IMaT: Unsupervised Text Attribute Transfer via Iterative Matching and Translation

3 code implementations IJCNLP 2019 Zhijing Jin, Di Jin, Jonas Mueller, Nicholas Matthews, Enrico Santus

Text attribute transfer aims to automatically rewrite sentences such that they possess certain linguistic attributes, while simultaneously preserving their semantic content.

Style Transfer, Text Attribute Transfer +2

GraphIE: A Graph-Based Framework for Information Extraction

2 code implementations NAACL 2019 Yujie Qian, Enrico Santus, Zhijing Jin, Jiang Guo, Regina Barzilay

Most modern Information Extraction (IE) systems are implemented as sequential taggers and only model local dependencies.

Is Structure Necessary for Modeling Argument Expectations in Distributional Semantics?

no code implementations WS 2017 Emmanuele Chersoni, Enrico Santus, Philippe Blache, Alessandro Lenci

Despite the number of NLP studies dedicated to thematic fit estimation, little attention has been paid to the related task of composing and updating verb argument expectations.

Measuring Thematic Fit with Distributional Feature Overlap

1 code implementation EMNLP 2017 Enrico Santus, Emmanuele Chersoni, Alessandro Lenci, Philippe Blache

In this paper, we introduce a new distributional method for modeling predicate-argument thematic fit judgments.

German in Flux: Detecting Metaphoric Change via Word Entropy

1 code implementation CONLL 2017 Dominik Schlechtweg, Stefanie Eckmann, Enrico Santus, Sabine Schulte im Walde, Daniel Hole

This paper explores the information-theoretic measure entropy to detect metaphoric change, transferring ideas from hypernym detection to research on language change.
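The entropy measure the abstract mentions can be sketched as follows. This is a minimal illustration of Shannon entropy over a word's context distribution, not the paper's exact formulation; the toy context counts and variable names are assumptions for demonstration only.

```python
import math
from collections import Counter

def word_entropy(context_counts):
    """Shannon entropy (in bits) of a word's context distribution.
    Intuitively, a word whose occurrences spread over many different
    contexts (e.g. after acquiring a metaphoric sense) has higher entropy
    than a word restricted to a narrow set of literal contexts."""
    total = sum(context_counts.values())
    return -sum((n / total) * math.log2(n / total)
                for n in context_counts.values())

# Toy example (hypothetical counts): the same word before and after
# a hypothesized metaphoric broadening of its usage.
narrow = Counter({"literal_ctx": 9, "other_ctx": 1})
broad = Counter({"literal_ctx": 4, "metaphoric_ctx": 3, "other_ctx": 3})
```

Comparing `word_entropy(narrow)` with `word_entropy(broad)` shows the broadened distribution yielding a higher value, which is the signal such change-detection methods look for across time-sliced corpora.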

Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy Detection

1 code implementation EACL 2017 Vered Shwartz, Enrico Santus, Dominik Schlechtweg

The fundamental role of hypernymy in NLP has motivated the development of many methods for the automatic identification of this relation, most of which rely on word distribution.

Hypernym Discovery

The CogALex-V Shared Task on the Corpus-Based Identification of Semantic Relations

no code implementations WS 2016 Enrico Santus, Anna Gladkova, Stefan Evert, Alessandro Lenci

The task is split into two subtasks: (i) identification of related word pairs vs. unrelated ones; (ii) classification of the word pairs according to their semantic relation.

Language Acquisition, Paraphrase Generation

CogALex-V Shared Task: ROOT18

no code implementations WS 2016 Emmanuele Chersoni, Giulia Rambelli, Enrico Santus

Our classifier participated in the CogALex-V Shared Task, showing a solid performance on the first subtask, but a poor performance on the second subtask.

Testing APSyn against Vector Cosine on Similarity Estimation

no code implementations PACLIC 2016 Enrico Santus, Emmanuele Chersoni, Alessandro Lenci, Chu-Ren Huang, Philippe Blache

In Distributional Semantic Models (DSMs), Vector Cosine is widely used to estimate similarity between word vectors, although this measure was noticed to suffer from several shortcomings.

Word Embeddings

Representing Verbs with Rich Contexts: an Evaluation on Verb Similarity

no code implementations EMNLP 2016 Emmanuele Chersoni, Enrico Santus, Alessandro Lenci, Philippe Blache, Chu-Ren Huang

Several studies on sentence processing suggest that the mental lexicon keeps track of the mutual expectations between words.

Unsupervised Measure of Word Similarity: How to Outperform Co-occurrence and Vector Cosine in VSMs

no code implementations 30 Mar 2016 Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang

In this paper, we claim that vector cosine, which is generally considered among the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by an unsupervised measure that calculates the extent of the intersection among the most mutually dependent contexts of the target words.

Word Similarity

ROOT13: Spotting Hypernyms, Co-Hyponyms and Randoms

no code implementations 29 Mar 2016 Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang

In this paper, we describe ROOT13, a supervised system for the classification of hypernyms, co-hyponyms and random words.

General Classification

What a Nerd! Beating Students and Vector Cosine in the ESL and TOEFL Datasets

no code implementations LREC 2016 Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang

In this paper, we claim that Vector Cosine, which is generally considered one of the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by a completely unsupervised measure that evaluates the extent of the intersection among the most associated contexts of two target words, weighting such intersection according to the rank of the shared contexts in the dependency ranked lists.

Word Similarity
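The measure described in the abstract above, a rank-weighted intersection of the two words' most associated contexts, can be sketched as follows. This is an illustrative approximation based only on the abstract's description; the ranking source (e.g. an association measure like PPMI), the averaging scheme, and the toy contexts are assumptions, not the paper's exact definition.

```python
def rank_weighted_overlap(contexts_a, contexts_b, top_n=100):
    """Similarity from the overlap of the two words' most associated contexts.
    contexts_a / contexts_b: context words ordered from most to least
    associated with each target word.  Each shared context contributes the
    inverse of its average rank, so contexts strongly associated with both
    words count more than contexts ranked low in either list."""
    top_a = {c: r for r, c in enumerate(contexts_a[:top_n], start=1)}
    top_b = {c: r for r, c in enumerate(contexts_b[:top_n], start=1)}
    shared = set(top_a) & set(top_b)
    return sum(1.0 / ((top_a[c] + top_b[c]) / 2.0) for c in shared)

# Toy example (hypothetical context lists for two related words):
sim_related = rank_weighted_overlap(["drink", "cup", "hot", "bean"],
                                    ["drink", "cup", "milk", "spoon"])
sim_unrelated = rank_weighted_overlap(["drink", "cup", "hot", "bean"],
                                      ["wheel", "road", "engine", "tire"])
```

Unlike vector cosine, such a measure needs no dense vectors at all, only the ranked context lists, which is what makes it fully unsupervised in the sense the abstract claims.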

Nine Features in a Random Forest to Learn Taxonomical Semantic Relations

1 code implementation LREC 2016 Enrico Santus, Alessandro Lenci, Tin-Shing Chiu, Qin Lu, Chu-Ren Huang

When the classification is binary, ROOT9 achieves the following results against the baseline: hypernyms-co-hyponyms 95.7% vs. 69.8%, hypernyms-random 91.8% vs. 64.1% and co-hyponyms-random 97.8% vs. 79.4%.

General Classification
