Search Results for author: Lucia Passaro

Found 8 papers, 3 papers with code

Bias Discovery within Human Raters: A Case Study of the Jigsaw Dataset

1 code implementation • NLPerspectives (LREC) 2022 • Marta Marchiori Manerba, Riccardo Guidotti, Lucia Passaro, Salvatore Ruggieri

Understanding and quantifying the bias introduced by human annotation of data is a crucial problem for trustworthy supervised learning.

Perspectives in Play: A Multi-Perspective Approach for More Inclusive NLP Systems

no code implementations • 25 Jun 2025 • Benedetta Muscato, Lucia Passaro, Gizem Gezici, Fosca Giannotti

In the realm of Natural Language Processing (NLP), common approaches for handling human disagreement consist of aggregating annotators' viewpoints to establish a single ground truth.
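
Purely for context, here is a minimal sketch of the aggregation baseline mentioned above, which collapses several annotators' labels into a single ground truth by majority vote; the example annotations and label names are made up for illustration:

    from collections import Counter

    # Hypothetical annotations: three annotators label the same two examples.
    annotations = {
        "ex1": ["abusive", "abusive", "not_abusive"],
        "ex2": ["not_abusive", "abusive", "not_abusive"],
    }

    # Majority-vote aggregation keeps one label per example and discards minority views.
    ground_truth = {ex: Counter(labels).most_common(1)[0][0]
                    for ex, labels in annotations.items()}
    print(ground_truth)  # {'ex1': 'abusive', 'ex2': 'not_abusive'}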

Abusive Language • Stance Detection +2

Embracing Diversity: A Multi-Perspective Approach with Soft Labels

no code implementations • 1 Mar 2025 • Benedetta Muscato, Praveen Bushipaka, Gizem Gezici, Lucia Passaro, Fosca Giannotti, Tommaso Cucinotta

Prior studies show that embracing the annotation diversity shaped by annotators' different backgrounds and life experiences, and incorporating it into model learning, i.e., a multi-perspective approach, contributes to the development of more responsible models.
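
As an illustration only (not the paper's released code), a common way to implement this is to turn each example's annotator votes into a soft label distribution and train against it with a soft cross-entropy; the vote counts and the stand-in model outputs below are assumptions:

    import torch
    import torch.nn.functional as F

    # Hypothetical raw votes from four annotators over 3 classes (favor/against/neutral).
    votes = torch.tensor([[2., 1., 1.],    # example 1: vote counts per class
                          [0., 3., 1.]])   # example 2
    soft_labels = votes / votes.sum(dim=1, keepdim=True)  # normalize to a distribution

    logits = torch.randn(2, 3, requires_grad=True)        # stand-in for model outputs
    # Soft cross-entropy: expected negative log-likelihood under the annotator distribution.
    loss = -(soft_labels * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    loss.backward()
    print(loss.item())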

Diversity • Stance Detection

All-in-one: Understanding and Generation in Multimodal Reasoning with the MAIA Benchmark

no code implementations • 24 Feb 2025 • Davide Testa, Giovanni Bonetta, Raffaella Bernardi, Alessandro Bondielli, Alessandro Lenci, Alessio Miaschi, Lucia Passaro, Bernardo Magnini

We introduce MAIA (Multimodal AI Assessment), a native-Italian benchmark designed for fine-grained investigation of the reasoning abilities of visual language models on videos.

All • Multimodal Reasoning +2

Multi-Perspective Stance Detection

1 code implementation • 13 Nov 2024 • Benedetta Muscato, Praveen Bushipaka, Gizem Gezici, Lucia Passaro, Fosca Giannotti

Subjective NLP tasks usually rely on human annotations provided by multiple annotators, whose judgments may vary due to their diverse backgrounds and life experiences.

Classification • Diversity +1

Prompting Encoder Models for Zero-Shot Classification: A Cross-Domain Study in Italian

no code implementations • 30 Jul 2024 • Serena Auriemma, Martina Miliani, Mauro Madeddu, Alessandro Bondielli, Lucia Passaro, Alessandro Lenci

Our study concentrates on the Italian bureaucratic and legal language, experimenting with both general-purpose and further pre-trained encoder-only models.
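
A rough sketch of the general prompting recipe for encoder-only (masked language) models with Hugging Face Transformers; the model name, cloze prompt, and verbalizer words below are assumptions for illustration, not the ones used in the paper:

    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    model_name = "dbmdz/bert-base-italian-cased"  # assumed Italian encoder; swap as needed
    tok = AutoTokenizer.from_pretrained(model_name)
    mlm = AutoModelForMaskedLM.from_pretrained(model_name)

    text = "Il ricorso deve essere depositato entro trenta giorni."
    # Cloze-style prompt: the encoder fills the mask with a label word (verbalizer).
    prompt = f"{text} Questo documento parla di {tok.mask_token}."
    verbalizer = {"legge": "legal", "economia": "economic"}  # hypothetical label words

    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**inputs).logits
    mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero(as_tuple=True)[1]
    mask_logits = logits[0, mask_pos[0]]

    # Score each class by the logit of its label word at the masked position.
    scores = {cls: mask_logits[tok.convert_tokens_to_ids(w)].item()
              for w, cls in verbalizer.items()}
    print(max(scores, key=scores.get))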

Document Classification • Entity Typing +3

Continually Learn to Map Visual Concepts to Large Language Models in Resource-constrained Environments

no code implementations • 11 Jul 2024 • Clea Rebillard, Julio Hurtado, Andrii Krutsylo, Lucia Passaro, Vincenzo Lomonaco

This work proposes Continual Visual Mapping (CVM), an approach that continually grounds vision representations in a knowledge space extracted from a fixed language model.
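
Purely to illustrate the general idea (a made-up sketch, not CVM itself): a small trainable projection maps image-encoder features onto frozen class embeddings taken from a fixed language model, and is the only component that gets updated:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vision_dim, text_dim, n_classes = 512, 768, 10

    # Frozen "knowledge space": class embeddings from a fixed language model
    # (here just random stand-ins, never updated).
    class_embeds = F.normalize(torch.randn(n_classes, text_dim), dim=1)

    # The only trainable component: a projection from vision features to that space.
    projector = nn.Linear(vision_dim, text_dim)

    image_feats = torch.randn(4, vision_dim)          # stand-in for a frozen image encoder
    labels = torch.tensor([0, 3, 3, 7])

    proj = F.normalize(projector(image_feats), dim=1)
    logits = proj @ class_embeds.t() / 0.07           # cosine similarities with temperature
    loss = F.cross_entropy(logits, labels)
    loss.backward()                                   # gradients flow only into the projector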

Continual Learning • Language Modeling +2

Continual Pre-Training Mitigates Forgetting in Language and Vision

1 code implementation • 19 May 2022 • Andrea Cossu, Tinne Tuytelaars, Antonio Carta, Lucia Passaro, Vincenzo Lomonaco, Davide Bacciu

We formalize and investigate the characteristics of the continual pre-training scenario in both language and vision environments, where a model is continually pre-trained on a stream of incoming data and only later fine-tuned to different downstream tasks.
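
Schematically (a hedged toy sketch with synthetic data, not the authors' released code), the scenario alternates continued self-supervised pre-training on a stream of incoming corpora with a later fine-tuning phase on a downstream task:

    import torch
    import torch.nn as nn

    # Toy stand-ins: a shared encoder, a pre-training head, and a downstream head.
    encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
    pretrain_head = nn.Linear(64, 32)   # toy reconstruction objective in place of MLM
    opt = torch.optim.Adam(list(encoder.parameters()) + list(pretrain_head.parameters()))

    # Continual pre-training: the encoder keeps training on a stream of incoming corpora.
    stream = [torch.randn(256, 32) for _ in range(3)]   # three synthetic "experiences"
    for corpus in stream:
        for _ in range(10):
            opt.zero_grad()
            loss = nn.functional.mse_loss(pretrain_head(encoder(corpus)), corpus)
            loss.backward()
            opt.step()

    # Only later: fine-tune the continually pre-trained encoder on a downstream task.
    clf_head = nn.Linear(64, 2)
    ft_opt = torch.optim.Adam(list(encoder.parameters()) + list(clf_head.parameters()))
    x, y = torch.randn(128, 32), torch.randint(0, 2, (128,))
    for _ in range(20):
        ft_opt.zero_grad()
        nn.functional.cross_entropy(clf_head(encoder(x)), y).backward()
        ft_opt.step()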

Continual Learning • Continual Pretraining
