Search Results for author: Alessandro Lenci

Found 31 papers, 7 papers with code

Looking for a Role for Word Embeddings in Eye-Tracking Features Prediction: Does Semantic Similarity Help?

no code implementations IWCS (ACL) 2021 Lavinia Salicchi, Alessandro Lenci, Emmanuele Chersoni

Eye-tracking psycholinguistic studies have suggested that context-word semantic coherence and predictability influence language processing during the reading activity.

Semantic Similarity · Semantic Textual Similarity +1

From Speed to Car and Back: An Exploratory Study about Associations between Abstract Nouns and Images

no code implementations CLASP 2022 Ludovica Cerini, Eliana Di Palma, Alessandro Lenci

Abstract concepts, notwithstanding their lack of physical referents in real world, are grounded in sensorimotor experience.

A Comparison between Named Entity Recognition Models in the Biomedical Domain

1 code implementation TRITON 2021 Maria Carmela Cariello, Alessandro Lenci, Ruslan Mitkov

The domain-specialised application of Named Entity Recognition (NER) is known as Biomedical NER (BioNER), which aims to identify and classify biomedical concepts that are of interest to researchers, such as genes, proteins, chemical compounds, drugs, mutations, diseases, and so on.

Named Entity Recognition +2

Decoding Word Embeddings with Brain-Based Semantic Features

no code implementations CL (ACL) 2021 Emmanuele Chersoni, Enrico Santus, Chu-Ren Huang, Alessandro Lenci

For each probing task, we identify the most relevant semantic features and we show that there is a correlation between the embedding performance and how they encode those features.

Retrieval · Word Embeddings

PIHKers at CMCL 2021 Shared Task: Cosine Similarity and Surprisal to Predict Human Reading Patterns

no code implementations NAACL (CMCL) 2021 Lavinia Salicchi, Alessandro Lenci

Eye-tracking psycholinguistic studies have revealed that context-word semantic coherence and predictability influence language processing.

regression

A howling success or a working sea? Testing what BERT knows about metaphors

no code implementations EMNLP (BlackboxNLP) 2021 Paolo Pedinotti, Eliana Di Palma, Ludovica Cerini, Alessandro Lenci

Metaphor is a widespread linguistic and cognitive phenomenon governed by mechanisms that have received considerable attention in the literature.

Attribute

Pragmatic and Logical Inferences in NLI Systems: The Case of Conjunction Buttressing

no code implementations NAACL (unimplicit) 2022 Paolo Pedinotti, Emmanuele Chersoni, Enrico Santus, Alessandro Lenci

An intelligent system is expected to perform reasonable inferences, accounting for both the literal meaning of a word and the meanings a word can acquire in different contexts.

Does BERT Recognize an Agent? Modeling Dowty’s Proto-Roles with Contextual Embeddings

no code implementations COLING 2022 Mattia Proietti, Gianluca Lebani, Alessandro Lenci

Contextual embeddings build multidimensional representations of word tokens based on their context of occurrence.

Comparing Plausibility Estimates in Base and Instruction-Tuned Large Language Models

no code implementations 21 Mar 2024 Carina Kauf, Emmanuele Chersoni, Alessandro Lenci, Evelina Fedorenko, Anna A. Ivanova

Experiment 1 shows that, across model architectures and plausibility datasets, (i) log likelihood (LL) scores are the most reliable indicator of sentence plausibility, with zero-shot prompting yielding inconsistent and typically poor results; (ii) LL-based performance is still inferior to human performance; (iii) instruction-tuned models have worse LL-based performance than base models. (A sketch of LL scoring follows this entry.)

Sentence
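As a rough illustration of the LL scoring the paper compares against prompting, the snippet below sums token log-probabilities under a causal language model via Hugging Face transformers. The model choice (gpt2) and scoring details are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of log-likelihood (LL) sentence scoring with a causal LM.
# Model name and details are illustrative, not the paper's exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_log_likelihood(sentence: str) -> float:
    """Summed token log-probabilities of the sentence under the model."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids, the returned loss is the mean negative
        # log-likelihood over predicted tokens; multiply back to get the sum.
        out = model(**inputs, labels=inputs["input_ids"])
    n_predicted = inputs["input_ids"].size(1) - 1
    return -out.loss.item() * n_predicted

# A plausible event should typically score higher than an implausible one.
print(sentence_log_likelihood("The teacher bought a laptop."))
print(sentence_log_likelihood("The laptop bought a teacher."))
```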

Agentività e telicità in GilBERTo: implicazioni cognitive (Agentivity and Telicity in GilBERTo: Cognitive Implications)

no code implementations 6 Jul 2023 Agnese Lombardi, Alessandro Lenci

The goal of this study is to investigate whether a Transformer-based neural language model infers lexical semantics and uses this information to complete morphosyntactic patterns.

Language Modelling

Understanding Natural Language Understanding Systems. A Critical Analysis

no code implementations 1 Mar 2023 Alessandro Lenci

I contend that they incorporate important aspects of the way language is learnt and processed by humans, but at the same time they lack key interpretive and inferential skills that they are unlikely to attain unless they are integrated with structured knowledge and the ability to exploit it for language use.

Natural Language Understanding

Event knowledge in large language models: the gap between the impossible and the unlikely

1 code implementation 2 Dec 2022 Carina Kauf, Anna A. Ivanova, Giulia Rambelli, Emmanuele Chersoni, Jingyuan Selena She, Zawad Chowdhury, Evelina Fedorenko, Alessandro Lenci

Overall, our results show that important aspects of event knowledge naturally emerge from distributional linguistic patterns, but also highlight a gap between representations of possible/impossible and likely/unlikely events.

Sentence · World Knowledge

Word Order Matters when you Increase Masking

no code implementations 8 Nov 2022 Karim Lasri, Alessandro Lenci, Thierry Poibeau

We find that the necessity of position information increases with the amount of masking, and that masked language models without position encodings are not able to reconstruct this information on the task.

Language Modelling · Position +1

Subject Verb Agreement Error Patterns in Meaningless Sentences: Humans vs. BERT

no code implementations COLING 2022 Karim Lasri, Olga Seminck, Alessandro Lenci, Thierry Poibeau

We compare the performance of BERT-base to that of humans, obtained with a psycholinguistic online crowdsourcing experiment.

Probing for the Usage of Grammatical Number

no code implementations ACL 2022 Karim Lasri, Tiago Pimentel, Alessandro Lenci, Thierry Poibeau, Ryan Cotterell

We also find that BERT uses a separate encoding of grammatical number for nouns and verbs.

Does BERT really agree? Fine-grained Analysis of Lexical Dependence on a Syntactic Task

no code implementations Findings (ACL) 2022 Karim Lasri, Alessandro Lenci, Thierry Poibeau

Although transformer-based Neural Language Models demonstrate impressive performance on a variety of tasks, their generalization abilities are not well understood.

A comparative evaluation and analysis of three generations of Distributional Semantic Models

1 code implementation 20 May 2021 Alessandro Lenci, Magnus Sahlgren, Patrick Jeuniaux, Amaru Cuba Gyllensten, Martina Miliani

In this paper, we perform a comprehensive evaluation of type distributional vectors, either produced by static DSMs or obtained by averaging the contextualized vectors generated by BERT.
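A minimal sketch of the averaging step described above, assuming bert-base-uncased, last-layer hidden states, and naive first-subtoken matching; the paper's actual pipeline and corpora differ.

```python
# Sketch: derive a type-level vector for a word by averaging the
# contextualized vectors BERT assigns to its occurrences. Layer choice,
# pooling, and the toy sentences are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def type_vector(word: str, sentences: list[str]) -> torch.Tensor:
    vecs = []
    word_id = tokenizer(word, add_special_tokens=False)["input_ids"][0]
    for sent in sentences:
        enc = tokenizer(sent, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]  # (seq_len, 768)
        # Collect vectors at positions where the word's first subtoken occurs.
        for i, tok in enumerate(enc["input_ids"][0].tolist()):
            if tok == word_id:
                vecs.append(hidden[i])
    return torch.stack(vecs).mean(dim=0)

v = type_vector("bank", ["She sat by the bank of the river.",
                         "The bank raised its interest rates."])
print(v.shape)  # torch.Size([768])
```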

Don't Invite BERT to Drink a Bottle: Modeling the Interpretation of Metonymies Using BERT and Distributional Representations

no code implementations COLING 2020 Paolo Pedinotti, Alessandro Lenci

The results reveal that, while BERT's ability to deal with metonymy is quite limited, SDM is good at predicting the meaning of metonymic expressions, providing support for an account of metonymy based on event knowledge.

PISA: A measure of Preference In Selection of Arguments to model verb argument recoverability

1 code implementation Joint Conference on Lexical and Computational Semantics 2020 Giulia Cappelli, Alessandro Lenci

Our paper offers a computational model of the semantic recoverability of verb arguments, tested in particular on direct objects and Instruments.

A Structured Distributional Model of Sentence Meaning and Processing

no code implementations 17 Jun 2019 Emmanuele Chersoni, Enrico Santus, Ludovica Pannitto, Alessandro Lenci, Philippe Blache, Chu-Ren Huang

In this paper, we propose a Structured Distributional Model (SDM) that combines word embeddings with formal semantics and is based on the assumption that sentences represent events and situations.

Sentence · Word Embeddings

Is Structure Necessary for Modeling Argument Expectations in Distributional Semantics?

no code implementations WS 2017 Emmanuele Chersoni, Enrico Santus, Philippe Blache, Alessandro Lenci

Despite the number of NLP studies dedicated to thematic fit estimation, little attention has been paid to the related task of composing and updating verb argument expectations.

Measuring Thematic Fit with Distributional Feature Overlap

1 code implementation EMNLP 2017 Enrico Santus, Emmanuele Chersoni, Alessandro Lenci, Philippe Blache

In this paper, we introduce a new distributional method for modeling predicate-argument thematic fit judgments.

The Effects of Data Size and Frequency Range on Distributional Semantic Models

no code implementations EMNLP 2016 Magnus Sahlgren, Alessandro Lenci

This paper investigates the effects of data size and frequency range on distributional semantic models.

Testing APSyn against Vector Cosine on Similarity Estimation

no code implementations PACLIC 2016 Enrico Santus, Emmanuele Chersoni, Alessandro Lenci, Chu-Ren Huang, Philippe Blache

In Distributional Semantic Models (DSMs), Vector Cosine is widely used to estimate similarity between word vectors, although this measure has been noted to suffer from several shortcomings. (A minimal cosine sketch follows this entry.)

Word Embeddings
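For reference, the vector cosine baseline that APSyn is tested against is just the normalized dot product, cos(u, v) = (u · v) / (‖u‖‖v‖); a minimal NumPy sketch with toy vectors:

```python
# The standard vector cosine baseline between two word vectors.
import numpy as np

def vector_cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

u = np.array([0.2, 1.3, 0.0, 0.7])
v = np.array([0.1, 1.1, 0.3, 0.9])
print(round(vector_cosine(u, v), 3))
```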

Representing Verbs with Rich Contexts: an Evaluation on Verb Similarity

no code implementations EMNLP 2016 Emmanuele Chersoni, Enrico Santus, Alessandro Lenci, Philippe Blache, Chu-Ren Huang

Several studies on sentence processing suggest that the mental lexicon keeps track of the mutual expectations between words.

Sentence

Unsupervised Measure of Word Similarity: How to Outperform Co-occurrence and Vector Cosine in VSMs

no code implementations 30 Mar 2016 Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang

In this paper, we claim that vector cosine, generally considered among the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by an unsupervised measure that calculates the extent of the intersection among the most mutually dependent contexts of the target words. (A sketch of such an intersection-based measure follows this entry.)

Word Similarity
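A sketch in the spirit of this measure (and of APSyn above): score the overlap of the two words' top-N ranked context lists, weighting each shared context by the inverse of its average rank. The exact weighting and the toy data are illustrative assumptions, not the paper's precise definition.

```python
# APSyn-style overlap measure over ranked context lists. In practice the
# lists would be contexts ranked by association strength (e.g. PPMI) in a
# DSM; here they are toy data.
def apsyn(contexts_a: list[str], contexts_b: list[str], n: int = 3) -> float:
    rank_a = {c: r for r, c in enumerate(contexts_a[:n], start=1)}
    rank_b = {c: r for r, c in enumerate(contexts_b[:n], start=1)}
    shared = rank_a.keys() & rank_b.keys()
    # Each shared context contributes the inverse of its average rank.
    return sum(1.0 / ((rank_a[c] + rank_b[c]) / 2.0) for c in shared)

# Contexts ranked most-associated first; "drive" and "road" are shared.
print(apsyn(["drive", "road", "wheel"], ["drive", "engine", "road"]))  # 1.4
```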

Nine Features in a Random Forest to Learn Taxonomical Semantic Relations

1 code implementation LREC 2016 Enrico Santus, Alessandro Lenci, Tin-Shing Chiu, Qin Lu, Chu-Ren Huang

When the classification is binary, ROOT9 achieves the following results against the baseline: hypernyms-co-hyponyms 95.7% vs. 69.8%, hypernyms-random 91.8% vs. 64.1%, and co-hyponyms-random 97.8% vs. 79.4%. (A sketch of the classifier setup follows this entry.)

General Classification
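A minimal sketch of the kind of setup ROOT9 describes: a random forest over nine corpus-derived features for binary relation classification, here with placeholder data in scikit-learn; the paper's actual features and data are not reproduced.

```python
# ROOT9-style setup: random forest over nine features per word pair for
# binary relation classification. Feature values and labels below are
# random placeholders, not the paper's corpus-based features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 9))          # one row per word pair, nine features
y = rng.integers(0, 2, size=200)  # 0 = co-hyponym, 1 = hypernym (toy labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=10).mean())
```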

What a Nerd! Beating Students and Vector Cosine in the ESL and TOEFL Datasets

no code implementations LREC 2016 Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang

In this paper, we claim that Vector Cosine, which is generally considered one of the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by a completely unsupervised measure that evaluates the extent of the intersection among the most associated contexts of two target words, weighting such intersection according to the rank of the shared contexts in the dependency ranked lists.

Word Similarity

ROOT13: Spotting Hypernyms, Co-Hyponyms and Randoms

no code implementations 29 Mar 2016 Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang

In this paper, we describe ROOT13, a supervised system for the classification of hypernyms, co-hyponyms and random words.

Classification · General Classification
