Search Results for author: Leonardo Ranaldi

Found 11 papers, 2 papers with code

Investigating the Impact of Data Contamination of Large Language Models in Text-to-SQL Translation

no code implementations12 Feb 2024 Federico Ranaldi, Elena Sofia Ruzzetti, Dario Onorati, Leonardo Ranaldi, Cristina Giannone, Andrea Favalli, Raniero Romagnoli, Fabio Massimo Zanzotto

Our results indicate a significant performance drop in GPT-3.5 on the unfamiliar Termite dataset, even with ATD modifications, highlighting the effect of Data Contamination on LLMs in Text-to-SQL translation tasks.

Instruction Following Text-To-SQL +1

When Large Language Models contradict humans? Large Language Models' Sycophantic Behaviour

no code implementations15 Nov 2023 Leonardo Ranaldi, Giulia Pucci

Large Language Models have demonstrated the ability to solve complex tasks by delivering answers that are positively evaluated by humans, due in part to the intensive use of human feedback to refine responses.

Empowering Multi-step Reasoning across Languages via Tree-of-Thoughts

no code implementations14 Nov 2023 Leonardo Ranaldi, Giulia Pucci, Federico Ranaldi, Elena Sofia Ruzzetti, Fabio Massimo Zanzotto

Reasoning methods, best exemplified by the well-known Chain-of-Thought (CoT), empower the reasoning abilities of Large Language Models (LLMs) by eliciting them to solve complex tasks in a step-by-step manner.

HANS, are you clever? Clever Hans Effect Analysis of Neural Systems

no code implementations21 Sep 2023 Leonardo Ranaldi, Fabio Massimo Zanzotto

Following a correlation between first positions and model choices due to positional bias, we hypothesized the presence of structural heuristics in the decision-making process of the It-LLMs, strengthened by including significant examples in few-shot scenarios.

Decision Making Multiple-choice +1

A Trip Towards Fairness: Bias and De-Biasing in Large Language Models

no code implementations23 May 2023 Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto

In this paper, we performed a large investigation of the bias of three families of CtB-LLMs, and we showed that debiasing techniques are effective and usable.

Fairness

PreCog: Exploring the Relation between Memorization and Performance in Pre-trained Language Models

no code implementations8 May 2023 Leonardo Ranaldi, Elena Sofia Ruzzetti, Fabio Massimo Zanzotto

Pre-trained Language Models such as BERT are impressive machines with the ability to memorize, and possibly generalize, learning examples.

Memorization Relation

Exploring Linguistic Properties of Monolingual BERTs with Typological Classification among Languages

no code implementations3 May 2023 Elena Sofia Ruzzetti, Federico Ranaldi, Felicia Logozzo, Michele Mastromattei, Leonardo Ranaldi, Fabio Massimo Zanzotto

The impressive achievements of transformers force NLP researchers to delve into how these models represent the underlying structure of natural language.

Domain Adaptation