Search Results for author: Fabio Massimo Zanzotto

Found 27 papers, 3 papers with code

Every time I fire a conversational designer, the performance of the dialogue system goes down

no code implementations LREC 2022 Giancarlo Xompero, Michele Mastromattei, Samir Salman, Cristina Giannone, Andrea Favalli, Raniero Romagnoli, Fabio Massimo Zanzotto

In fact, rules from conversational designers used in CLINN significantly outperform a state-of-the-art neural-based dialogue system when trained with smaller sets of annotated dialogues.

Task-Oriented Dialogue Systems

Investigating the Impact of Data Contamination of Large Language Models in Text-to-SQL Translation

no code implementations12 Feb 2024 Federico Ranaldi, Elena Sofia Ruzzetti, Dario Onorati, Leonardo Ranaldi, Cristina Giannone, Andrea Favalli, Raniero Romagnoli, Fabio Massimo Zanzotto

Our results indicate a significant performance drop in GPT-3.5 on the unfamiliar Termite dataset, even with ATD modifications, highlighting the effect of Data Contamination on LLMs in Text-to-SQL translation tasks.

Instruction Following Text-To-SQL +1

Less is KEN: a Universal and Simple Non-Parametric Pruning Algorithm for Large Language Models

1 code implementation5 Feb 2024 Michele Mastromattei, Fabio Massimo Zanzotto

This approach maintains model performance while allowing storage of only the optimized subnetwork, leading to significant memory savings.

Density Estimation Network Pruning +1
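The KEN entry above describes selecting a subnetwork by density estimation and storing only that subnetwork. The following is a minimal illustrative sketch of that general idea, not the authors' implementation: the `gaussian_kde_scores` helper, the bandwidth, and the criterion of keeping the least-dense (most atypical) weights are all assumptions made here for illustration.

```python
import numpy as np

def gaussian_kde_scores(values, bandwidth=0.1):
    # Estimated density of each value, using a Gaussian kernel over all values.
    diffs = values[:, None] - values[None, :]
    return np.exp(-0.5 * (diffs / bandwidth) ** 2).mean(axis=1)

def select_subnetwork(weights, keep_ratio=0.25):
    # Keep the parameters lying in the least dense regions of the value
    # distribution (assumed criterion: atypical weights are most informative).
    flat = weights.ravel()
    scores = gaussian_kde_scores(flat)
    k = max(1, int(keep_ratio * flat.size))
    keep_idx = np.argsort(scores)[:k]  # lowest density first
    mask = np.zeros(flat.size, dtype=bool)
    mask[keep_idx] = True
    return mask.reshape(weights.shape)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
mask = select_subnetwork(W, keep_ratio=0.25)
print(int(mask.sum()))  # 16 of 64 parameters kept
```

Only the masked parameters (indices plus values) would need to be stored; the remaining weights can be restored from the original pre-trained checkpoint, which is where the memory savings described above come from.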

Empowering Multi-step Reasoning across Languages via Tree-of-Thoughts

no code implementations14 Nov 2023 Leonardo Ranaldi, Giulia Pucci, Federico Ranaldi, Elena Sofia Ruzzetti, Fabio Massimo Zanzotto

Reasoning methods, best exemplified by the well-known Chain-of-Thought (CoT), empower the reasoning abilities of Large Language Models (LLMs) by eliciting them to solve complex tasks in a step-by-step manner.

HANS, are you clever? Clever Hans Effect Analysis of Neural Systems

no code implementations21 Sep 2023 Leonardo Ranaldi, Fabio Massimo Zanzotto

Following a correlation between first positions and model choices due to positional bias, we hypothesized the presence of structural heuristics in the decision-making process of the It-LLMs, strengthened by including significant examples in few-shot scenarios.

Decision Making Multiple-choice +1

A Trip Towards Fairness: Bias and De-Biasing in Large Language Models

no code implementations23 May 2023 Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto

In this paper, we performed a large investigation of the bias of three families of CtB-LLMs, and we showed that debiasing techniques are effective and usable.

Fairness

PreCog: Exploring the Relation between Memorization and Performance in Pre-trained Language Models

no code implementations8 May 2023 Leonardo Ranaldi, Elena Sofia Ruzzetti, Fabio Massimo Zanzotto

Pre-trained Language Models such as BERT are impressive machines with the ability to memorize, and possibly generalize, learning examples.

Memorization Relation

Exploring Linguistic Properties of Monolingual BERTs with Typological Classification among Languages

no code implementations3 May 2023 Elena Sofia Ruzzetti, Federico Ranaldi, Felicia Logozzo, Michele Mastromattei, Leonardo Ranaldi, Fabio Massimo Zanzotto

The impressive achievements of transformers force NLP researchers to delve into how these models represent the underlying structure of natural language.

Domain Adaptation

Active Informed Consent to Boost the Application of Machine Learning in Medicine

no code implementations27 Sep 2022 Marco Gerardi, Katarzyna Barud, Marie-Catherine Wagner, Nikolaus Forgo, Francesca Fallucchi, Noemi Scarpato, Fiorella Guadagni, Fabio Massimo Zanzotto

In this paper, we present Active Informed Consent (AIC) as a novel hybrid legal-technological tool to foster the gathering of a large amount of data for machine learning.

Every time I fire a conversational designer, the performance of the dialog system goes down

no code implementations27 Sep 2021 Giancarlo A. Xompero, Michele Mastromattei, Samir Salman, Cristina Giannone, Andrea Favalli, Raniero Romagnoli, Fabio Massimo Zanzotto

Incorporating explicit domain knowledge into neural-based task-oriented dialogue systems is an effective way to reduce the need of large sets of annotated dialogues.

Task-Oriented Dialogue Systems

GASP! Generating Abstracts of Scientific Papers from Abstracts of Cited Papers

1 code implementation28 Feb 2020 Fabio Massimo Zanzotto, Viviana Bono, Paola Vocca, Andrea Santilli, Danilo Croce, Giorgio Gambosi, Roberto Basili

In this paper, we dare to introduce the novel, scientifically and philosophically challenging task of Generating Abstracts of Scientific Papers from abstracts of cited papers (GASP) as a text-to-text task to investigate scientific creativity. To foster research in this novel, challenging task, we prepared a dataset by using services that solve the problem of copyright and, hence, the dataset is publicly available with its standard split.

SyntNN at SemEval-2018 Task 2: is Syntax Useful for Emoji Prediction? Embedding Syntactic Trees in Multi Layer Perceptrons

no code implementations SEMEVAL 2018 Fabio Massimo Zanzotto, Andrea Santilli

In this paper, we present SyntNN as a way to include traditional syntactic models in the multilayer neural networks used for SemEval-2018 Task 2 on emoji prediction.

Task 2

Human-in-the-loop Artificial Intelligence

no code implementations23 Oct 2017 Fabio Massimo Zanzotto

Little by little, newspapers are revealing the bright future that Artificial Intelligence (AI) is building.

Parsing with CYK over Distributed Representations

no code implementations24 May 2017 Fabio Massimo Zanzotto, Giordano Cristini, Giorgio Satta

By showing that CYK can be entirely performed on distributed representations, we open the way to the definition of recurrent layers of CYK-informed neural networks.
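For reference, the classical symbolic CYK algorithm that the paper lifts onto distributed representations is sketched below. This is the standard recognizer for a grammar in Chomsky Normal Form, not the paper's vector-based formulation; the toy lexicon and rules are invented for the example.

```python
from itertools import product

def cyk(words, lexicon, rules):
    # Standard CYK recognizer for a CNF grammar.
    # lexicon: terminal -> set of nonterminals (A -> w)
    # rules: (B, C) -> set of nonterminals A with A -> B C
    n = len(words)
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i + 1] = set(lexicon.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):  # split point
                for B, C in product(table[i][k], table[k][j]):
                    table[i][j] |= rules.get((B, C), set())
    return table[0][n]  # nonterminals deriving the whole sentence

lexicon = {"she": {"NP"}, "eats": {"V"}, "fish": {"NP"}}
rules = {("V", "NP"): {"VP"}, ("NP", "VP"): {"S"}}
print(cyk(["she", "eats", "fish"], lexicon, rules))  # {'S'}
```

The paper's contribution is to replace the discrete chart cells and rule lookups above with operations over distributed (vector) representations, so that the whole procedure can run inside a neural network.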
