1 code implementation • EMNLP 2020 • Fabio Massimo Zanzotto, Andrea Santilli, Leonardo Ranaldi, Dario Onorati, Pierfrancesco Tommasino, Francesca Fallucchi
Syntactic parsers have dominated natural language understanding for decades.
no code implementations • 12 Feb 2024 • Federico Ranaldi, Elena Sofia Ruzzetti, Dario Onorati, Leonardo Ranaldi, Cristina Giannone, Andrea Favalli, Raniero Romagnoli, Fabio Massimo Zanzotto
Our results indicate a significant performance drop in GPT-3.5 on the unfamiliar Termite dataset, even with ATD modifications, highlighting the effect of Data Contamination on LLMs in Text-to-SQL translation tasks.
no code implementations • 15 Nov 2023 • Leonardo Ranaldi, Giulia Pucci
Large Language Models have demonstrated the ability to solve complex tasks, delivering answers that humans evaluate positively, due in part to the intensive use of human feedback to refine their responses.
no code implementations • 14 Nov 2023 • Leonardo Ranaldi, Giulia Pucci, Federico Ranaldi, Elena Sofia Ruzzetti, Fabio Massimo Zanzotto
Reasoning methods, best exemplified by the well-known Chain-of-Thought (CoT), empower the reasoning abilities of Large Language Models (LLMs) by eliciting them to solve complex tasks in a step-by-step manner.
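The Chain-of-Thought idea described above can be sketched as follows (a minimal illustration, not the paper's own implementation; the exemplar text and helper name are hypothetical): a CoT prompt prepends a worked, step-by-step example before the target question, eliciting the model to reason in steps.

```python
def build_cot_prompt(question: str) -> str:
    """Prepend a step-by-step worked exemplar to elicit
    step-by-step reasoning from an LLM (hypothetical exemplar)."""
    exemplar = (
        "Q: Roger has 5 balls. He buys 2 cans of 3 balls each. "
        "How many balls does he have now?\n"
        "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
        "5 + 6 = 11. The answer is 11.\n\n"
    )
    # Close with a cue that invites step-by-step reasoning.
    return exemplar + f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt("A train travels 60 km in 1.5 hours. What is its speed?")
```

The resulting string would then be sent to the model in place of the bare question.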
no code implementations • 21 Sep 2023 • Leonardo Ranaldi, Fabio Massimo Zanzotto
Following a correlation between first positions and model choices due to positional bias, we hypothesized the presence of structural heuristics in the decision-making process of the It-LLMs, strengthened by including significant examples in few-shot scenarios.
1 code implementation • 27 Aug 2023 • Leonardo Ranaldi, Giulia Pucci, Andre Freitas
This disparity demands further fine-tuning and affects the cross-lingual abilities of LLMs.
no code implementations • 23 May 2023 • Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto
In this paper, we performed a large investigation of the bias of three families of CtB-LLMs, and we showed that debiasing techniques are effective and usable.
no code implementations • 8 May 2023 • Leonardo Ranaldi, Elena Sofia Ruzzetti, Fabio Massimo Zanzotto
Pre-trained Language Models such as BERT are impressive machines with the ability to memorize, and possibly generalize, learning examples.
no code implementations • 3 May 2023 • Elena Sofia Ruzzetti, Federico Ranaldi, Felicia Logozzo, Michele Mastromattei, Leonardo Ranaldi, Fabio Massimo Zanzotto
The impressive achievements of transformers force NLP researchers to delve into how these models represent the underlying structure of natural language.
no code implementations • 14 Jan 2022 • Leonardo Ranaldi, Aria Nourbakhsh, Arianna Patrizi, Elena Sofia Ruzzetti, Dario Onorati, Francesca Fallucchi, Fabio Massimo Zanzotto
Pre-trained Transformers are challenging human performance in many NLP tasks.
no code implementations • Findings (ACL) 2022 • Elena Sofia Ruzzetti, Leonardo Ranaldi, Michele Mastromattei, Francesca Fallucchi, Fabio Massimo Zanzotto
In this paper, we propose to use definitions retrieved in traditional dictionaries to produce word embeddings for rare words.
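A minimal sketch of the idea, assuming the rare word's vector is built by pooling the embeddings of the known words in its dictionary definition (the toy vectors, toy definition, and helper name are hypothetical, not the paper's implementation):

```python
import numpy as np

# Toy pre-trained embedding table (hypothetical 2-d vectors for illustration).
embeddings = {
    "small": np.array([0.1, 0.9]),
    "rodent": np.array([0.8, 0.2]),
    "animal": np.array([0.7, 0.3]),
}

def embed_from_definition(definition: str) -> np.ndarray:
    """Build a vector for a rare word by averaging the embeddings
    of the in-vocabulary words in its dictionary definition."""
    vectors = [embeddings[w] for w in definition.split() if w in embeddings]
    return np.mean(vectors, axis=0)

# A rare word defined (toy definition) as "small rodent animal"
# receives the mean of those three vectors.
rare_vec = embed_from_definition("small rodent animal")
```

Averaging is only one possible pooling choice; the point is that the definition supplies in-vocabulary context from which a vector for an otherwise unseen word can be derived.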