Search Results for author: Giuseppe Attanasio

Found 21 papers, 19 papers with code

Watching the Watchers: Exposing Gender Disparities in Machine Translation Quality Estimation

1 code implementation · 14 Oct 2024 · Emmanouil Zaranis, Giuseppe Attanasio, Sweta Agrawal, André F. T. Martins

Focusing on out-of-English translations into languages with grammatical gender, we ask: Do contemporary QE metrics exhibit gender bias?

Machine Translation · Translation

Twists, Humps, and Pebbles: Multilingual Speech Recognition Models Exhibit Gender Performance Gaps

1 code implementation · 28 Feb 2024 · Giuseppe Attanasio, Beatrice Savoldi, Dennis Fucci, Dirk Hovy

Our findings have implications for the improvement of multilingual ASR systems, underscoring the importance of accessibility to training data and nuanced evaluation to predict and mitigate gender gaps.

Automatic Speech Recognition (ASR) +1

Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language Models that Follow Instructions

4 code implementations · 14 Sep 2023 · Federico Bianchi, Mirac Suzgun, Giuseppe Attanasio, Paul Röttger, Dan Jurafsky, Tatsunori Hashimoto, James Zou

Training large language models to follow instructions makes them perform better on a wide range of tasks and generally become more helpful.

Weigh Your Own Words: Improving Hate Speech Counter Narrative Generation via Attention Regularization

1 code implementation · 5 Sep 2023 · Helena Bonaldi, Giuseppe Attanasio, Debora Nozza, Marco Guerini

Regularized models produce better counter narratives than state-of-the-art approaches in most cases, both in terms of automatic metrics and human evaluation, especially when hateful targets are not present in the training data.

ITALIC: An Italian Intent Classification Dataset

1 code implementation · 14 Jun 2023 · Alkis Koudounas, Moreno La Quatra, Lorenzo Vaiani, Luca Colomba, Giuseppe Attanasio, Eliana Pastor, Luca Cagliero, Elena Baralis

Recent large-scale Spoken Language Understanding datasets focus predominantly on English and do not account for language-specific phenomena such as particular phonemes or words in different lects.

Classification · intent-classification +4

E Pluribus Unum: Guidelines on Multi-Objective Evaluation of Recommender Systems

1 code implementation · 20 Apr 2023 · Patrick John Chia, Giuseppe Attanasio, Jacopo Tagliabue, Federico Bianchi, Ciro Greco, Gabriel de Souza P. Moreira, Davide Eynard, Fahd Husain

Recommender Systems today are still mostly evaluated in terms of accuracy, with other aspects beyond the immediate relevance of recommendations, such as diversity, long-term user retention and fairness, often taking a back seat.

Diversity · Fairness +2

Is It Worth the (Environmental) Cost? Limited Evidence for Temporal Adaptation via Continuous Training

no code implementations · 13 Oct 2022 · Giuseppe Attanasio, Debora Nozza, Federico Bianchi, Dirk Hovy

Consequently, we should continuously update our models with new data to expose them to new events and facts.

ferret: a Framework for Benchmarking Explainers on Transformers

1 code implementation · 2 Aug 2022 · Giuseppe Attanasio, Eliana Pastor, Chiara Di Bonaventura, Debora Nozza

With ferret, users can visualize and compare output explanations of Transformer-based models using state-of-the-art XAI methods on any free text or existing XAI corpora.

Benchmarking · Explainable Artificial Intelligence (XAI) +2

EvalRS: a Rounded Evaluation of Recommender Systems

1 code implementation · 12 Jul 2022 · Jacopo Tagliabue, Federico Bianchi, Tobias Schnabel, Giuseppe Attanasio, Ciro Greco, Gabriel de Souza P. Moreira, Patrick John Chia

Much of the complexity of Recommender Systems (RSs) comes from the fact that they are used as part of more complex applications and affect user experience through a varied range of user interfaces.

Recommendation Systems

Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists

1 code implementation · Findings (ACL) 2022 · Giuseppe Attanasio, Debora Nozza, Dirk Hovy, Elena Baralis

EAR also reveals overfitting terms, i.e., the terms most likely to induce bias, helping to identify their effect on the model, task, and predictions.

Bias Detection · Fairness +1

Contrastive Language-Image Pre-training for the Italian Language

1 code implementation · 19 Aug 2021 · Federico Bianchi, Giuseppe Attanasio, Raphael Pisoni, Silvia Terragni, Gabriele Sarti, Sri Lakshmi

CLIP (Contrastive Language-Image Pre-training) is a recent multi-modal model that jointly learns representations of images and texts.

Image Retrieval · Multi-label zero-shot learning +2
