no code implementations • CMCL (ACL) 2022 • Nora Hollenstein, Emmanuele Chersoni, Cassandra Jacobs, Yohei Oseki, Laurent Prévot, Enrico Santus
We present the second shared task on eye-tracking data prediction of the Cognitive Modeling and Computational Linguistics Workshop (CMCL).
no code implementations • CL (ACL) 2021 • Emmanuele Chersoni, Enrico Santus, Chu-Ren Huang, Alessandro Lenci
For each probing task, we identify the most relevant semantic features and show that embedding performance correlates with how well those features are encoded.
no code implementations • NAACL (CMCL) 2021 • Nora Hollenstein, Emmanuele Chersoni, Cassandra L. Jacobs, Yohei Oseki, Laurent Prévot, Enrico Santus
The goal of the task is to predict 5 different token-level eye-tracking metrics of the Zurich Cognitive Language Processing Corpus (ZuCo).
no code implementations • SMM4H (COLING) 2022 • Beatrice Portelli, Simone Scaboro, Emmanuele Chersoni, Enrico Santus, Giuseppe Serra
This paper describes the models developed by the AILAB-Udine team for the SMM4H’22 Shared Task.
no code implementations • COLING (CogALex) 2020 • Rong Xiang, Emmanuele Chersoni, Luca Iacoponi, Enrico Santus
One dataset contained pairs for each of the training languages (systems were evaluated in a monolingual fashion), while the other proposed a surprise language to test the cross-lingual transfer capabilities of the systems.
no code implementations • NAACL (unimplicit) 2022 • Paolo Pedinotti, Emmanuele Chersoni, Enrico Santus, Alessandro Lenci
An intelligent system is expected to perform reasonable inferences, accounting for both the literal meaning of a word and the meanings a word can acquire in different contexts.
1 code implementation • 8 Jun 2023 • Simone Scaboro, Beatrice Portelli, Emmanuele Chersoni, Enrico Santus, Giuseppe Serra
Adverse Event (ADE) extraction is one of the core tasks in digital pharmacovigilance, especially when applied to informal texts.
1 code implementation • 21 Oct 2022 • Beatrice Portelli, Simone Scaboro, Enrico Santus, Hooman Sedghamiz, Emmanuele Chersoni, Giuseppe Serra
Medical term normalization consists of mapping a piece of text to one of a large number of output classes.
no code implementations • 7 Sep 2022 • Beatrice Portelli, Simone Scaboro, Emmanuele Chersoni, Enrico Santus, Giuseppe Serra
This paper describes the models developed by the AILAB-Udine team for the SMM4H '22 Shared Task.
no code implementations • 6 Sep 2022 • Simone Scaboro, Beatrice Portelli, Emmanuele Chersoni, Enrico Santus, Giuseppe Serra
In the last decade, an increasing number of users have started reporting Adverse Drug Events (ADE) on social media platforms, blogs, and health forums.
1 code implementation • WNUT (ACL) 2021 • Simone Scaboro, Beatrice Portelli, Emmanuele Chersoni, Enrico Santus, Giuseppe Serra
Adverse Drug Event (ADE) extraction models can rapidly examine large collections of social media texts, detecting mentions of drug-related adverse reactions and triggering medical investigations.
1 code implementation • Findings (EMNLP) 2021 • Hooman Sedghamiz, Shivam Raval, Enrico Santus, Tuka Alhanai, Mohammad Ghassemi
This paper introduces SupCL-Seq, which extends supervised contrastive learning from computer vision to the optimization of sequence representations in NLP.
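The supervised contrastive objective that SupCL-Seq builds on can be sketched as follows. This is a minimal NumPy re-derivation of the general SupCon-style loss (pulling same-label representations together, pushing different-label ones apart), not the authors' implementation; the function name and the normalization assumptions are illustrative.

```python
import numpy as np

def sup_con_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss sketch.

    embeddings: (n, d) array, assumed L2-normalized; labels: (n,) ints.
    Each sample is an anchor; samples sharing its label are positives.
    """
    n = embeddings.shape[0]
    sim = embeddings @ embeddings.T / temperature          # pairwise similarities
    mask_self = ~np.eye(n, dtype=bool)                     # exclude i == i pairs
    # numerically stable log-softmax over all other samples
    row_max = sim[mask_self].reshape(n, n - 1).max(axis=1, keepdims=True)
    logits = sim - row_max
    exp = np.exp(logits) * mask_self                       # zero out self-similarity
    log_prob = logits - np.log(exp.sum(axis=1, keepdims=True))
    positives = (labels[:, None] == labels[None, :]) & mask_self
    pos_counts = positives.sum(axis=1)
    # mean negative log-probability of positives, per anchor with positives
    loss = -(log_prob * positives).sum(axis=1) / np.maximum(pos_counts, 1)
    return loss[pos_counts > 0].mean()
```

Intuitively, the loss is low when each anchor assigns most of its softmax mass to samples with the same label, which is what drives same-class sequence representations closer together.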
1 code implementation • Findings (EMNLP) 2021 • Shivam Raval, Hooman Sedghamiz, Enrico Santus, Tuka Alhanai, Mohammad Ghassemi, Emmanuele Chersoni
Adverse Events (AE) are harmful events resulting from the use of medical products.
1 code implementation • Joint Conference on Lexical and Computational Semantics 2021 • Paolo Pedinotti, Giulia Rambelli, Emmanuele Chersoni, Enrico Santus, Alessandro Lenci, Philippe Blache
Prior research has explored the ability of computational models to predict a word's semantic fit with a given predicate.
1 code implementation • 19 May 2021 • Beatrice Portelli, Daniele Passabì, Edoardo Lenzi, Giuseppe Serra, Enrico Santus, Emmanuele Chersoni
In recent years, Internet users have been reporting Adverse Drug Events (ADE) on social media, blogs and health forums.
1 code implementation • EACL 2021 • Beatrice Portelli, Edoardo Lenzi, Emmanuele Chersoni, Giuseppe Serra, Enrico Santus
Pretrained transformer-based models, such as BERT and its variants, have become a common choice to obtain state-of-the-art performances in NLP tasks.
1 code implementation • 21 Oct 2020 • Jiaming Luo, Frederik Hartmann, Enrico Santus, Yuan Cao, Regina Barzilay
We evaluate the model on both deciphered languages (Gothic, Ugaritic) and an undeciphered one (Iberian).
no code implementations • WS 2020 • Beatrice Portelli, Jason Zhao, Tal Schuster, Giuseppe Serra, Enrico Santus
We propose, instead, a model-agnostic framework that consists of two modules: (1) a span extractor, which identifies the crucial information connecting claim and evidence; and (2) a classifier that combines claim, evidence, and the extracted spans to predict the veracity of the claim.
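The two-module, model-agnostic setup described above can be sketched as a thin pipeline: a span extractor feeding a veracity classifier. The interfaces below are assumptions for illustration; any extractor/classifier pair matching these signatures could be plugged in.

```python
from typing import Callable, List

def verify_claim(claim: str,
                 evidence: str,
                 extract_spans: Callable[[str, str], List[str]],
                 classify: Callable[[str, str, List[str]], str]) -> str:
    # (1) span extractor: identify the crucial information
    #     connecting claim and evidence
    spans = extract_spans(claim, evidence)
    # (2) classifier: combine claim, evidence, and the extracted
    #     spans to predict the veracity of the claim
    return classify(claim, evidence, spans)
```

Because the framework only fixes the interfaces, either module can be swapped (a rule-based extractor, a neural classifier, etc.) without touching the rest of the pipeline.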
no code implementations • LREC 2020 • Emmanuele Chersoni, Ludovica Pannitto, Enrico Santus, Alessandro Lenci, Chu-Ren Huang
While neural embeddings represent a popular choice for word representation in a wide variety of NLP tasks, their usage for thematic fit modeling has been limited, as they have been reported to lag behind syntax-based count models.
3 code implementations • IJCNLP 2019 • Tal Schuster, Darsh J Shah, Yun Jie Serene Yeo, Daniel Filizzola, Enrico Santus, Regina Barzilay
Fact verification requires validating a claim in the context of evidence.
no code implementations • 17 Jun 2019 • Emmanuele Chersoni, Enrico Santus, Ludovica Pannitto, Alessandro Lenci, Philippe Blache, Chu-Ren Huang
In this paper, we propose a Structured Distributional Model (SDM) that combines word embeddings with formal semantics and is based on the assumption that sentences represent events and situations.
3 code implementations • IJCNLP 2019 • Zhijing Jin, Di Jin, Jonas Mueller, Nicholas Matthews, Enrico Santus
Text attribute transfer aims to automatically rewrite sentences such that they possess certain linguistic attributes, while simultaneously preserving their semantic content.
2 code implementations • NAACL 2019 • Yujie Qian, Enrico Santus, Zhijing Jin, Jiang Guo, Regina Barzilay
Most modern Information Extraction (IE) systems are implemented as sequential taggers and only model local dependencies.
no code implementations • SEMEVAL 2018 • Jose Camacho-Collados, Claudio Delli Bovi, Luis Espinosa-Anke, Sergio Oramas, Tommaso Pasini, Enrico Santus, Vered Shwartz, Roberto Navigli, Horacio Saggion
This paper describes the SemEval 2018 Shared Task on Hypernym Discovery.
no code implementations • ACL 2018 • Enrico Santus, Hongmin Wang, Emmanuele Chersoni, Yue Zhang
Word embeddings have recently established themselves as a standard for representing word meaning in NLP.
no code implementations • SEMEVAL 2018 • Enrico Santus, Chris Biemann, Emmanuele Chersoni
This paper describes BomJi, a supervised system for capturing discriminative attributes in word pairs (e.g., yellow as discriminative for banana over watermelon).
Ranked #3 on Relation Extraction on SemEval 2018 Task 10
no code implementations • WS 2017 • Emmanuele Chersoni, Enrico Santus, Philippe Blache, Alessandro Lenci
Despite the number of NLP studies dedicated to thematic fit estimation, little attention has been paid to the related task of composing and updating verb argument expectations.
1 code implementation • EMNLP 2017 • Enrico Santus, Emmanuele Chersoni, Alessandro Lenci, Philippe Blache
In this paper, we introduce a new distributional method for modeling predicate-argument thematic fit judgments.
1 code implementation • CONLL 2017 • Dominik Schlechtweg, Stefanie Eckmann, Enrico Santus, Sabine Schulte im Walde, Daniel Hole
This paper explores the information-theoretic measure entropy to detect metaphoric change, transferring ideas from hypernym detection to research on language change.
1 code implementation • EACL 2017 • Vered Shwartz, Enrico Santus, Dominik Schlechtweg
The fundamental role of hypernymy in NLP has motivated the development of many methods for the automatic identification of this relation, most of which rely on word distribution.
Ranked #7 on Hypernym Discovery on Music domain
no code implementations • WS 2016 • Enrico Santus, Anna Gladkova, Stefan Evert, Alessandro Lenci
The task is split into two subtasks: (i) identification of related word pairs vs. unrelated ones; (ii) classification of the word pairs according to their semantic relation.
no code implementations • WS 2016 • Emmanuele Chersoni, Giulia Rambelli, Enrico Santus
Our classifier participated in the CogALex-V Shared Task, showing a solid performance on the first subtask, but a poor performance on the second subtask.
no code implementations • PACLIC 2016 • Enrico Santus, Emmanuele Chersoni, Alessandro Lenci, Chu-Ren Huang, Philippe Blache
In Distributional Semantic Models (DSMs), Vector Cosine is widely used to estimate similarity between word vectors, although this measure has been noted to suffer from several shortcomings.
no code implementations • EMNLP 2016 • Emmanuele Chersoni, Enrico Santus, Alessandro Lenci, Philippe Blache, Chu-Ren Huang
Several studies on sentence processing suggest that the mental lexicon keeps track of the mutual expectations between words.
no code implementations • LREC 2016 • Hongchao Liu, Karl Neergaard, Enrico Santus, Chu-Ren Huang
The 360 word relation pairs contain 373 relata.
no code implementations • 30 Mar 2016 • Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang
In this paper, we claim that vector cosine, which is generally considered among the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by an unsupervised measure that calculates the extent of the intersection among the most mutually dependent contexts of the target words.
no code implementations • LREC 2016 • Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang
In this paper, we claim that Vector Cosine, which is generally considered one of the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by a completely unsupervised measure that evaluates the extent of the intersection among the most associated contexts of two target words, weighting such intersection according to the rank of the shared contexts in the dependency ranked lists.
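The measure described above can be sketched as a rank-weighted overlap of the two targets' most associated contexts. This is an illustrative reconstruction in the spirit of the description (an APSyn-style measure), not the authors' reference implementation; the function name, the `top_k` cutoff, and the exact average-rank weighting are assumptions.

```python
def rank_weighted_overlap(contexts_a, contexts_b, top_k=100):
    """Score two target words by the overlap of their top contexts.

    contexts_a / contexts_b: lists of context words, each sorted by
    decreasing association strength with its target word.
    """
    rank_a = {c: i + 1 for i, c in enumerate(contexts_a[:top_k])}  # 1-based ranks
    rank_b = {c: i + 1 for i, c in enumerate(contexts_b[:top_k])}
    shared = set(rank_a) & set(rank_b)
    # A shared context contributes more the higher it ranks for both
    # targets: weight by the inverse of its average rank.
    return sum(1.0 / ((rank_a[c] + rank_b[c]) / 2.0) for c in shared)
```

Unlike Vector Cosine, which compares full vectors, this score depends only on the few most associated contexts and on where the shared ones sit in each dependency-ranked list.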
no code implementations • 29 Mar 2016 • Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang
In this paper, we describe ROOT13, a supervised system for the classification of hypernyms, co-hyponyms and random words.
1 code implementation • LREC 2016 • Enrico Santus, Alessandro Lenci, Tin-Shing Chiu, Qin Lu, Chu-Ren Huang
When the classification is binary, ROOT9 achieves the following results against the baseline: hypernyms-co-hyponyms 95.7% vs. 69.8%, hypernyms-random 91.8% vs. 64.1%, and co-hyponyms-random 97.8% vs. 79.4%.