no code implementations • NAACL (CMCL) 2021 • Lavinia Salicchi, Alessandro Lenci
Eye-tracking psycholinguistic studies have revealed that context-word semantic coherence and predictability influence language processing.
no code implementations • COLING 2022 • Mattia Proietti, Gianluca Lebani, Alessandro Lenci
Contextual embeddings build multidimensional representations of word tokens based on their context of occurrence.
1 code implementation • TRITON 2021 • Maria Carmela Cariello, Alessandro Lenci, Ruslan Mitkov
The domain-specialised application of Named Entity Recognition (NER) is known as Biomedical NER (BioNER), which aims to identify and classify biomedical concepts that are of interest to researchers, such as genes, proteins, chemical compounds, drugs, mutations, diseases, and so on.
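By way of illustration, a BioNER tagger can be applied in a few lines with the HuggingFace pipeline API; the checkpoint name below is a hypothetical placeholder, not the system evaluated in this paper, and must be replaced with a real model fine-tuned on biomedical entities:

```python
# Minimal BioNER sketch using the HuggingFace token-classification pipeline.
# The model name is a hypothetical placeholder; substitute any checkpoint
# fine-tuned for biomedical entity recognition.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="some-org/biomedical-ner-model",  # hypothetical checkpoint
    aggregation_strategy="simple",          # merge word pieces into entity spans
)

text = "Mutations in the BRCA1 gene increase the risk of breast cancer."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```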
no code implementations • EMNLP (BlackboxNLP) 2021 • Paolo Pedinotti, Eliana Di Palma, Ludovica Cerini, Alessandro Lenci
Metaphor is a widespread linguistic and cognitive phenomenon that is ruled by mechanisms which have received attention in the literature.
no code implementations • CLASP 2022 • Ludovica Cerini, Eliana Di Palma, Alessandro Lenci
Abstract concepts, notwithstanding their lack of physical referents in the real world, are grounded in sensorimotor experience.
no code implementations • NAACL (unimplicit) 2022 • Paolo Pedinotti, Emmanuele Chersoni, Enrico Santus, Alessandro Lenci
An intelligent system is expected to perform reasonable inferences, accounting for both the literal meaning of a word and the meanings a word can acquire in different contexts.
no code implementations • CL (ACL) 2021 • Emmanuele Chersoni, Enrico Santus, Chu-Ren Huang, Alessandro Lenci
For each probing task, we identify the most relevant semantic features and we show that there is a correlation between the embedding performance and how they encode those features.
no code implementations • IWCS (ACL) 2021 • Lavinia Salicchi, Alessandro Lenci, Emmanuele Chersoni
Eye-tracking psycholinguistic studies have suggested that context-word semantic coherence and predictability influence language processing during the reading activity.
no code implementations • 10 Dec 2024 • Philippe Blache, Emmanuele Chersoni, Giulia Rambelli, Alessandro Lenci
In this paper, we present an approach based on Construction Grammars, extending this framework to account for these different mechanisms.
no code implementations • 30 Jul 2024 • Serena Auriemma, Martina Miliani, Mauro Madeddu, Alessandro Bondielli, Lucia Passaro, Alessandro Lenci
Our study concentrates on the Italian bureaucratic and legal language, experimenting with both general-purpose and further pre-trained encoder-only models.
1 code implementation • 21 Mar 2024 • Carina Kauf, Emmanuele Chersoni, Alessandro Lenci, Evelina Fedorenko, Anna A. Ivanova
Semantic plausibility (e.g., knowing that "the actor won the award" is more likely than "the actor won the battle") serves as an effective proxy for general world knowledge.
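As an illustration of how such a plausibility contrast can be scored with a language model (a minimal sketch, not the evaluation protocol of this paper), one can compare the average token log-likelihood that a causal LM such as GPT-2 assigns to each sentence:

```python
# Sketch: rank two sentences by average log-likelihood under GPT-2.
# Illustrative proxy only; not the evaluation protocol of the paper.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_log_likelihood(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy
        # over predicted tokens; negating it gives the average log-likelihood.
        loss = model(ids, labels=ids).loss
    return -loss.item()

print(avg_log_likelihood("The actor won the award."))   # expected: higher
print(avg_log_likelihood("The actor won the battle."))  # expected: lower
```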
no code implementations • 6 Jul 2023 • Agnese Lombardi, Alessandro Lenci
The goal of this study is to investigate whether a Transformer-based neural language model infers lexical semantics and uses this information for the completion of morphosyntactic patterns.
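For illustration, this kind of morphosyntactic completion can be elicited from a masked language model with the HuggingFace fill-mask pipeline (a sketch assuming a BERT-style model, not the paper's experimental setup):

```python
# Sketch: probe a masked LM for morphosyntactic agreement.
# Illustrative only; not the probing setup used in the study.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Does the model prefer the verb form that agrees with the subject?
for result in fill("The keys to the cabinet [MASK] on the table.", top_k=3):
    print(result["token_str"], round(result["score"], 3))
```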
no code implementations • 1 Mar 2023 • Alessandro Lenci
I contend that they incorporate important aspects of the way language is learnt and processed by humans, but at the same time they lack key interpretive and inferential skills that they are unlikely to attain unless they are integrated with structured knowledge and the ability to exploit it for language use.
1 code implementation • 2 Dec 2022 • Carina Kauf, Anna A. Ivanova, Giulia Rambelli, Emmanuele Chersoni, Jingyuan Selena She, Zawad Chowdhury, Evelina Fedorenko, Alessandro Lenci
Overall, our results show that important aspects of event knowledge naturally emerge from distributional linguistic patterns, but also highlight a gap between representations of possible/impossible and likely/unlikely events.
no code implementations • 8 Nov 2022 • Karim Lasri, Alessandro Lenci, Thierry Poibeau
We find that the necessity of position information increases with the amount of masking, and that masked language models without position encodings are not able to reconstruct this information on the task.
no code implementations • COLING 2022 • Karim Lasri, Olga Seminck, Alessandro Lenci, Thierry Poibeau
We compare the performance of BERT-base to that of humans, obtained with a psycholinguistic online crowdsourcing experiment.
no code implementations • ACL 2022 • Karim Lasri, Tiago Pimentel, Alessandro Lenci, Thierry Poibeau, Ryan Cotterell
We also find that BERT uses a separate encoding of grammatical number for nouns and verbs.
no code implementations • Findings (ACL) 2022 • Karim Lasri, Alessandro Lenci, Thierry Poibeau
Although transformer-based Neural Language Models demonstrate impressive performance on a variety of tasks, their generalization abilities are not well understood.
1 code implementation • Joint Conference on Lexical and Computational Semantics 2021 • Paolo Pedinotti, Giulia Rambelli, Emmanuele Chersoni, Enrico Santus, Alessandro Lenci, Philippe Blache
Prior research has explored the ability of computational models to predict a word's semantic fit with a given predicate.
1 code implementation • 20 May 2021 • Alessandro Lenci, Magnus Sahlgren, Patrick Jeuniaux, Amaru Cuba Gyllensten, Martina Miliani
In this paper, we perform a comprehensive evaluation of type distributional vectors, either produced by static DSMs or obtained by averaging the contextualized vectors generated by BERT.
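A minimal sketch of the averaging step for the BERT-derived type vectors (assuming the HuggingFace transformers API; corpus handling and subword segmentation are simplified here):

```python
# Sketch: derive a type-level vector for a word by averaging its
# contextualized BERT token vectors across occurrences in a corpus.
# Simplified: assumes the word maps to a single WordPiece token.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def type_vector(word: str, sentences: list[str]) -> torch.Tensor:
    word_id = tokenizer.convert_tokens_to_ids(word)
    vectors = []
    for sent in sentences:
        enc = tokenizer(sent, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
        for pos, tok_id in enumerate(enc.input_ids[0]):
            if tok_id == word_id:
                vectors.append(hidden[pos])
    return torch.stack(vectors).mean(dim=0)

vec = type_vector("bank", ["He sat by the bank of the river.",
                           "The bank raised interest rates."])
print(vec.shape)  # torch.Size([768])
```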
no code implementations • COLING 2020 • Paolo Pedinotti, Alessandro Lenci
The results reveal that, while BERT's ability to deal with metonymy is quite limited, SDM is good at predicting the meaning of metonymic expressions, providing support for an account of metonymy based on event knowledge.
1 code implementation • Joint Conference on Lexical and Computational Semantics 2020 • Giulia Cappelli, Alessandro Lenci
Our paper offers a computational model of the semantic recoverability of verb arguments, tested in particular on direct objects and Instruments.
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Giulia Rambelli, Emmanuele Chersoni, Alessandro Lenci, Philippe Blache, Chu-Ren Huang
In linguistics and cognitive science, logical metonymies are defined as type clashes between an event-selecting verb and an entity-denoting noun (e.g.
no code implementations • 17 Jun 2019 • Emmanuele Chersoni, Enrico Santus, Ludovica Pannitto, Alessandro Lenci, Philippe Blache, Chu-Ren Huang
In this paper, we propose a Structured Distributional Model (SDM) that combines word embeddings with formal semantics and is based on the assumption that sentences represent events and situations.
no code implementations • WS 2017 • Emmanuele Chersoni, Enrico Santus, Philippe Blache, Alessandro Lenci
Despite the number of NLP studies dedicated to thematic fit estimation, little attention has been paid to the related task of composing and updating verb argument expectations.
1 code implementation • EMNLP 2017 • Enrico Santus, Emmanuele Chersoni, Alessandro Lenci, Philippe Blache
In this paper, we introduce a new distributional method for modeling predicate-argument thematic fit judgments.
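For context, a common prototype-based baseline for thematic fit (not necessarily the method introduced in this paper) averages the vectors of a predicate's typical argument fillers and scores a candidate by cosine with that prototype:

```python
# Sketch of a common prototype-based thematic-fit baseline
# (not necessarily the method introduced in this paper): average the
# vectors of a predicate's typical fillers, then score a candidate
# argument by cosine similarity with that prototype vector.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def thematic_fit(candidate_vec: np.ndarray,
                 typical_filler_vecs: list[np.ndarray]) -> float:
    prototype = np.mean(typical_filler_vecs, axis=0)
    return cosine(candidate_vec, prototype)

# Toy vectors standing in for real word embeddings.
fillers = [np.array([0.8, 0.1, 0.2]), np.array([0.7, 0.2, 0.1])]
print(thematic_fit(np.array([0.75, 0.15, 0.15]), fillers))
```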
no code implementations • EMNLP 2016 • Magnus Sahlgren, Alessandro Lenci
This paper investigates the effects of data size and frequency range on distributional semantic models.
no code implementations • PACLIC 2016 • Enrico Santus, Emmanuele Chersoni, Alessandro Lenci, Chu-Ren Huang, Philippe Blache
In Distributional Semantic Models (DSMs), Vector Cosine is widely used to estimate similarity between word vectors, although this measure has been noted to suffer from several shortcomings.
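For reference, Vector Cosine is simply the normalized dot product between two word vectors; a minimal sketch:

```python
# Vector cosine between two word vectors: the normalized dot product.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

u = np.array([0.2, 0.7, 0.1])
v = np.array([0.3, 0.6, 0.2])
print(cosine(u, v))  # 1.0 = identical direction, 0.0 = orthogonal
```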
no code implementations • EMNLP 2016 • Emmanuele Chersoni, Enrico Santus, Alessandro Lenci, Philippe Blache, Chu-Ren Huang
Several studies on sentence processing suggest that the mental lexicon keeps track of the mutual expectations between words.
no code implementations • 30 Mar 2016 • Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang
In this paper, we claim that vector cosine, which is generally considered among the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by an unsupervised measure that calculates the extent of the intersection among the most mutually dependent contexts of the target words.
no code implementations • 29 Mar 2016 • Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang
In this paper, we describe ROOT13, a supervised system for the classification of hypernyms, co-hyponyms and random words.
1 code implementation • LREC 2016 • Enrico Santus, Alessandro Lenci, Tin-Shing Chiu, Qin Lu, Chu-Ren Huang
When the classification is binary, ROOT9 achieves the following results against the baseline: hypernyms-co-hyponyms 95.7% vs. 69.8%, hypernyms-random 91.8% vs. 64.1%, and co-hyponyms-random 97.8% vs. 79.4%.
no code implementations • LREC 2016 • Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang
In this paper, we claim that Vector Cosine, which is generally considered one of the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by a completely unsupervised measure that evaluates the extent of the intersection among the most associated contexts of two target words, weighting this intersection by the rank of the shared contexts in the dependency-ranked lists.
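A hedged sketch of such a measure, under our assumption (not spelled out in the excerpt) that each shared context among the top-N contexts contributes the inverse of its average rank in the two lists:

```python
# Sketch of a rank-weighted context-intersection measure.
# Assumption (ours): each context shared among the top-N contexts of
# the two words contributes the inverse of its average rank.
def rank_intersection(contexts_a: list[str], contexts_b: list[str],
                      top_n: int = 100) -> float:
    rank_a = {c: r for r, c in enumerate(contexts_a[:top_n], start=1)}
    rank_b = {c: r for r, c in enumerate(contexts_b[:top_n], start=1)}
    shared = rank_a.keys() & rank_b.keys()
    return sum(1.0 / ((rank_a[c] + rank_b[c]) / 2) for c in shared)

# Contexts ranked by association strength (most associated first).
print(rank_intersection(["drink", "cup", "hot", "bean"],
                        ["drink", "hot", "leaf", "cup"]))
```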