Search Results for author: Emmanuele Chersoni

Found 41 papers, 9 papers with code

Is Domain Adaptation Worth Your Investment? Comparing BERT and FinBERT on Financial Tasks

no code implementations EMNLP (ECONLP) 2021 Bo Peng, Emmanuele Chersoni, Yu-Yin Hsu, Chu-Ren Huang

With the recent rise in popularity of Transformer models in Natural Language Processing, research efforts have been dedicated to the development of domain-adapted versions of BERT-like architectures.

Continual Pretraining Domain Adaptation

Looking for a Role for Word Embeddings in Eye-Tracking Features Prediction: Does Semantic Similarity Help?

no code implementations IWCS (ACL) 2021 Lavinia Salicchi, Alessandro Lenci, Emmanuele Chersoni

Eye-tracking psycholinguistic studies have suggested that context-word semantic coherence and predictability influence language processing during the reading activity.

Semantic Similarity Semantic Textual Similarity +1

CMCL 2021 Shared Task on Eye-Tracking Prediction

no code implementations NAACL (CMCL) 2021 Nora Hollenstein, Emmanuele Chersoni, Cassandra L. Jacobs, Yohei Oseki, Laurent Prévot, Enrico Santus

The goal of the task is to predict 5 different token-level eye-tracking metrics of the Zurich Cognitive Language Processing Corpus (ZuCo).

The CogALex Shared Task on Monolingual and Multilingual Identification of Semantic Relations

no code implementations COLING (CogALex) 2020 Rong Xiang, Emmanuele Chersoni, Luca Iacoponi, Enrico Santus

The evaluation featured two datasets: one containing pairs for each of the training languages (systems were evaluated in a monolingual fashion), and the other proposing a surprise language to test the crosslingual transfer capabilities of the systems.

Evaluating Monolingual and Crosslingual Embeddings on Datasets of Word Association Norms

no code implementations LREC (BUCC) 2022 Trina Kwong, Emmanuele Chersoni, Rong Xiang

In free word association tasks, human subjects are presented with a stimulus word and are then asked to name the first word (the response word) that comes to their mind.

Association Word Embeddings

Discovering Financial Hypernyms by Prompting Masked Language Models

no code implementations FNP (LREC) 2022 Bo Peng, Emmanuele Chersoni, Yu-Yin Hsu, Chu-Ren Huang

With the rising popularity of Transformer-based language models, several studies have tried to exploit their masked language modeling capabilities to automatically extract relational linguistic knowledge, although this kind of research has rarely investigated semantic relations in specialized domains.

Domain Adaptation Language Modelling +1
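The prompting approach summarized above can be sketched as cloze-style templates whose masked slot a language model fills with candidate hypernyms. The templates and the example term below are illustrative, not necessarily those used in the paper:

```python
# Sketch of cloze-style hypernym prompting with a masked language model.
# The prompt templates and the example term are illustrative assumptions,
# not the exact ones used in the paper.

def build_prompts(term: str, mask_token: str = "[MASK]") -> list[str]:
    """Build Hearst-style cloze prompts whose mask position should be
    filled by a hypernym of `term`."""
    templates = [
        "{term} is a kind of {mask}.",
        "{term} is a type of {mask}.",
        "{term} and other {mask}s.",
    ]
    return [t.format(term=term, mask=mask_token) for t in templates]

for prompt in build_prompts("bond"):
    print(prompt)

# A masked LM (e.g., via a fill-mask pipeline) would then score vocabulary
# items for the mask slot; high-probability fillers are candidate hypernyms.
```

Ranking the model's top fillers across several templates, rather than relying on a single prompt, is a common way to reduce template sensitivity.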

Pragmatic and Logical Inferences in NLI Systems: The Case of Conjunction Buttressing

no code implementations NAACL (unimplicit) 2022 Paolo Pedinotti, Emmanuele Chersoni, Enrico Santus, Alessandro Lenci

An intelligent system is expected to perform reasonable inferences, accounting for both the literal meaning of a word and the meanings a word can acquire in different contexts.

Lexicon of Changes: Towards the Evaluation of Diachronic Semantic Shift in Chinese

no code implementations LChange (ACL) 2022 Jing Chen, Emmanuele Chersoni, Chu-Ren Huang

Recent research has brought a wave of computational approaches to the classic topic of semantic change, aiming to tackle one of the most challenging issues in the evolution of human language.

Decoding Word Embeddings with Brain-Based Semantic Features

no code implementations CL (ACL) 2021 Emmanuele Chersoni, Enrico Santus, Chu-Ren Huang, Alessandro Lenci

For each probing task, we identify the most relevant semantic features and we show that there is a correlation between the embedding performance and how they encode those features.

Retrieval Word Embeddings

Event knowledge in large language models: the gap between the impossible and the unlikely

1 code implementation 2 Dec 2022 Carina Kauf, Anna A. Ivanova, Giulia Rambelli, Emmanuele Chersoni, Jingyuan S. She, Zawad Chowdhury, Evelina Fedorenko, Alessandro Lenci

We conclude by speculating that the differential performance on impossible vs. unlikely events is not a temporary setback but an inherent property of LLMs, reflecting a fundamental difference between linguistic knowledge and world knowledge in intelligent systems.

Increasing Adverse Drug Events extraction robustness on social media: case study on negation and speculation

no code implementations 6 Sep 2022 Simone Scaboro, Beatrice Portelli, Emmanuele Chersoni, Enrico Santus, Giuseppe Serra

In the last decade, an increasing number of users have started reporting Adverse Drug Events (ADE) on social media platforms, blogs, and health forums.

NADE: A Benchmark for Robust Adverse Drug Events Extraction in Face of Negations

1 code implementation WNUT (ACL) 2021 Simone Scaboro, Beatrice Portelli, Emmanuele Chersoni, Enrico Santus, Giuseppe Serra

Adverse Drug Event (ADE) extraction models can rapidly examine large collections of social media texts, detecting mentions of drug-related adverse reactions and trigger medical investigations.

Negation Detection

PolyU CBS-Comp at SemEval-2021 Task 1: Lexical Complexity Prediction (LCP)

no code implementations SEMEVAL 2021 Rong Xiang, Jinghang Gu, Emmanuele Chersoni, Wenjie Li, Qin Lu, Chu-Ren Huang

In this contribution, we describe the system presented by the PolyU CBS-Comp Team at the Task 1 of SemEval 2021, where the goal was the estimation of the complexity of words in a given sentence context.

Lexical Complexity Prediction Word Embeddings

Automatic Learning of Modality Exclusivity Norms with Crosslingual Word Embeddings

no code implementations Joint Conference on Lexical and Computational Semantics 2020 Emmanuele Chersoni, Rong Xiang, Qin Lu, Chu-Ren Huang

Our experiments focused on crosslingual word embeddings, in order to predict modality association scores by training on a high-resource language and testing on a low-resource one.

Association Word Embeddings

Using Conceptual Norms for Metaphor Detection

no code implementations WS 2020 Mingyu Wan, Kathleen Ahrens, Emmanuele Chersoni, Menghan Jiang, Qi Su, Rong Xiang, Chu-Ren Huang

This paper reports a linguistically-enriched method of detecting token-level metaphors for the second shared task on Metaphor Detection.

Are Word Embeddings Really a Bad Fit for the Estimation of Thematic Fit?

no code implementations LREC 2020 Emmanuele Chersoni, Ludovica Pannitto, Enrico Santus, Alessandro Lenci, Chu-Ren Huang

While neural embeddings represent a popular choice for word representation in a wide variety of NLP tasks, their usage for thematic fit modeling has been limited, as they have been reported to lag behind syntax-based count models.

Word Embeddings

Distributional Semantics Meets Construction Grammar. Towards a Unified Usage-Based Model of Grammar and Meaning

no code implementations WS 2019 Giulia Rambelli, Emmanuele Chersoni, Philippe Blache, Chu-Ren Huang, Alessandro Lenci

In this paper, we propose a new type of semantic representation of Construction Grammar that combines constructions with the vector representations used in Distributional Semantics.

A Structured Distributional Model of Sentence Meaning and Processing

no code implementations 17 Jun 2019 Emmanuele Chersoni, Enrico Santus, Ludovica Pannitto, Alessandro Lenci, Philippe Blache, Chu-Ren Huang

In this paper, we propose a Structured Distributional Model (SDM) that combines word embeddings with formal semantics and is based on the assumption that sentences represent events and situations.

Word Embeddings

Modeling Violations of Selectional Restrictions with Distributional Semantics

no code implementations WS 2018 Emmanuele Chersoni, Adrià Torrens Urrutia, Philippe Blache, Alessandro Lenci

Distributional Semantic Models have been successfully used for modeling selectional preferences in a variety of scenarios, since distributional similarity naturally provides an estimate of the degree to which an argument satisfies the requirement of a given predicate.

Is Structure Necessary for Modeling Argument Expectations in Distributional Semantics?

no code implementations WS 2017 Emmanuele Chersoni, Enrico Santus, Philippe Blache, Alessandro Lenci

Despite the number of NLP studies dedicated to thematic fit estimation, little attention has been paid to the related task of composing and updating verb argument expectations.

Logical Metonymy in a Distributional Model of Sentence Comprehension

no code implementations SEMEVAL 2017 Emmanuele Chersoni, Alessandro Lenci, Philippe Blache

In theoretical linguistics, logical metonymy is defined as the combination of an event-subcategorizing verb with an entity-denoting direct object (e.g., The author began the book), so that the interpretation of the VP requires the retrieval of a covert event (e.g., writing).


Measuring Thematic Fit with Distributional Feature Overlap

1 code implementation EMNLP 2017 Enrico Santus, Emmanuele Chersoni, Alessandro Lenci, Philippe Blache

In this paper, we introduce a new distributional method for modeling predicate-argument thematic fit judgments.

Towards a Distributional Model of Semantic Complexity

no code implementations WS 2016 Emmanuele Chersoni, Philippe Blache, Alessandro Lenci

The composition cost of a sentence depends on the semantic coherence of the event being constructed and on the activation degree of the linguistic constructions.

CogALex-V Shared Task: ROOT18

no code implementations WS 2016 Emmanuele Chersoni, Giulia Rambelli, Enrico Santus

Our classifier participated in the CogALex-V Shared Task, showing a solid performance on the first subtask, but a poor performance on the second subtask.

Testing APSyn against Vector Cosine on Similarity Estimation

no code implementations PACLIC 2016 Enrico Santus, Emmanuele Chersoni, Alessandro Lenci, Chu-Ren Huang, Philippe Blache

In Distributional Semantic Models (DSMs), Vector Cosine is widely used to estimate similarity between word vectors, although this measure has been noted to suffer from several shortcomings.

Word Embeddings
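The two measures compared above can be sketched side by side: standard vector cosine over full vectors, and a simplified reading of APSyn, which instead scores two words by the ranks of their shared top context features. The APSyn formulation and the toy features below are assumptions and may differ in detail from the paper:

```python
import numpy as np

# Vector Cosine, the standard DSM similarity measure, next to a rough
# sketch of APSyn, which compares the ranks of the two words' top context
# features. This is a simplified reading of APSyn, not the paper's exact
# formulation; the feature lists are made-up examples.

def cosine(u, v):
    """Cosine similarity between two dense word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def apsyn(ranked_feats_1, ranked_feats_2):
    """Score two words by their shared top-N context features, weighting
    each shared feature by the inverse of its average rank (1-based)."""
    r1 = {f: i + 1 for i, f in enumerate(ranked_feats_1)}
    r2 = {f: i + 1 for i, f in enumerate(ranked_feats_2)}
    shared = set(r1) & set(r2)
    return sum(1.0 / ((r1[f] + r2[f]) / 2.0) for f in shared)

# Identical vectors score close to 1.0 under cosine.
print(cosine(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0])))
# Hypothetical top context features for two near-synonymous verbs.
print(apsyn(["drink", "pour", "spill"], ["drink", "sip", "pour"]))
```

Because APSyn looks only at the most salient shared contexts, it can be less sensitive than cosine to noise in low-weight dimensions.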

Representing Verbs with Rich Contexts: an Evaluation on Verb Similarity

no code implementations EMNLP 2016 Emmanuele Chersoni, Enrico Santus, Alessandro Lenci, Philippe Blache, Chu-Ren Huang

Several studies on sentence processing suggest that the mental lexicon keeps track of the mutual expectations between words.
