1 code implementation • NAACL (WOAH) 2022 • Debora Nozza, Federico Bianchi, Giuseppe Attanasio
Online hate speech is a dangerous phenomenon that can (and should) be countered promptly and properly.
1 code implementation • SemEval (NAACL) 2022 • Giuseppe Attanasio, Debora Nozza, Federico Bianchi
In this paper, we describe the system proposed by the MilaNLP team for the Multimedia Automatic Misogyny Identification (MAMI) challenge.
1 code implementation • nlppower (ACL) 2022 • Giuseppe Attanasio, Debora Nozza, Eliana Pastor, Dirk Hovy
In this paper, we provide the first benchmark study of interpretability approaches for hate speech detection.
1 code implementation • 18 Oct 2023 • Giuseppe Attanasio, Flor Miriam Plaza-del-Arco, Debora Nozza, Anne Lauscher
In machine translation (MT), this might lead to misgendered translations, resulting, among other harms, in the perpetuation of stereotypes and prejudices.
no code implementations • 14 Sep 2023 • Eliana Pastor, Alkis Koudounas, Giuseppe Attanasio, Dirk Hovy, Elena Baralis
Existing work focuses on a few spoken language understanding (SLU) tasks, and explanations are difficult to interpret for most users.
1 code implementation • 14 Sep 2023 • Federico Bianchi, Mirac Suzgun, Giuseppe Attanasio, Paul Röttger, Dan Jurafsky, Tatsunori Hashimoto, James Zou
Training large language models to follow instructions makes them perform better on a wide range of tasks, generally becoming more helpful.
1 code implementation • 5 Sep 2023 • Helena Bonaldi, Giuseppe Attanasio, Debora Nozza, Marco Guerini
Regularized models produce better counter-narratives than state-of-the-art approaches in most cases, in terms of both automatic metrics and human evaluation, especially when the hateful targets are not present in the training data.
1 code implementation • 2 Aug 2023 • Paul Röttger, Hannah Rose Kirk, Bertie Vidgen, Giuseppe Attanasio, Federico Bianchi, Dirk Hovy
In this paper, we introduce a new test suite called XSTest to identify such eXaggerated Safety behaviours in a systematic way.
1 code implementation • 14 Jun 2023 • Alkis Koudounas, Moreno La Quatra, Lorenzo Vaiani, Luca Colomba, Giuseppe Attanasio, Eliana Pastor, Luca Cagliero, Elena Baralis
Recent large-scale Spoken Language Understanding datasets focus predominantly on English and do not account for language-specific phenomena such as particular phonemes or words in different lects.
1 code implementation • 20 Apr 2023 • Patrick John Chia, Giuseppe Attanasio, Jacopo Tagliabue, Federico Bianchi, Ciro Greco, Gabriel de Souza P. Moreira, Davide Eynard, Fahd Husain
Recommender Systems today are still mostly evaluated in terms of accuracy; other aspects beyond the immediate relevance of recommendations, such as diversity, long-term user retention, and fairness, often take a back seat.
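As one concrete illustration of such a beyond-accuracy aspect, here is a hedged sketch of an intra-list diversity score computed over a single recommendation slate. The function name and inputs are hypothetical examples, not the paper's metrics:

```python
import numpy as np

def intra_list_diversity(item_embeddings):
    """Hypothetical sketch: average pairwise cosine distance among the items
    in one recommendation slate. Higher values mean a more diverse slate."""
    X = np.asarray(item_embeddings, dtype=float)
    assert len(X) >= 2, "need at least two items to measure diversity"
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    sims = X @ X.T                                    # pairwise cosine similarities
    off_diag = sims[~np.eye(len(X), dtype=bool)]      # drop self-similarities
    return float(np.mean(1.0 - off_diag))
```

A slate of near-duplicate items scores close to 0, while items pointing in unrelated embedding directions score closer to 1, which is the kind of signal accuracy-only evaluation misses.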
no code implementations • 13 Oct 2022 • Giuseppe Attanasio, Debora Nozza, Federico Bianchi, Dirk Hovy
Consequently, we should continuously update our models with new data to expose them to new events and facts.
1 code implementation • 2 Aug 2022 • Giuseppe Attanasio, Eliana Pastor, Chiara Di Bonaventura, Debora Nozza
With ferret, users can visualize and compare the output explanations of transformer-based models, produced with state-of-the-art XAI methods, on any free-text input or existing XAI corpora.
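As a rough illustration of that workflow, a minimal sketch following ferret's documented usage pattern; the checkpoint is an arbitrary example and method names may differ across library versions:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from ferret import Benchmark

# Example checkpoint; any sequence-classification model should work.
name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

bench = Benchmark(model, tokenizer)  # wraps several XAI methods at once
explanations = bench.explain("Great movie, terrible ending.", target=1)
bench.show_table(explanations)       # side-by-side token attributions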
1 code implementation • 12 Jul 2022 • Jacopo Tagliabue, Federico Bianchi, Tobias Schnabel, Giuseppe Attanasio, Ciro Greco, Gabriel de Souza P. Moreira, Patrick John Chia
Much of the complexity of Recommender Systems (RSs) comes from the fact that they are used as part of more complex applications and affect user experience through a varied range of user interfaces.
1 code implementation • Scientific Reports 2022 • Patrick John Chia, Giuseppe Attanasio, Federico Bianchi, Silvia Terragni, Ana Rita Magalhães, Diogo Goncalves, Ciro Greco, Jacopo Tagliabue
The steady rise of online shopping goes hand in hand with the development of increasingly complex ML and NLP models.
1 code implementation • Findings (ACL) 2022 • Giuseppe Attanasio, Debora Nozza, Dirk Hovy, Elena Baralis
EAR also reveals overfitting terms, i.e., the terms most likely to induce bias, to help identify their effect on the model, task, and predictions.
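For intuition, a hedged sketch of what entropy-based attention regularization can look like in PyTorch. This illustrates the general technique of penalizing overly peaked attention, not the paper's exact implementation; all names are hypothetical:

```python
import torch

def entropy_attention_regularizer(attentions, strength=0.01):
    """Sketch: encourage broad (high-entropy) attention distributions so the
    classifier does not over-rely on individual identity terms."""
    neg_entropies = []
    for attn in attentions:  # one (batch, heads, query, key) tensor per layer
        # Shannon entropy of each query's attention distribution over keys.
        entropy = -(attn * attn.clamp_min(1e-9).log()).sum(dim=-1)
        neg_entropies.append(-entropy.mean())  # maximize entropy = minimize its negative
    return strength * torch.stack(neg_entropies).mean()

# Usage inside a training step, assuming a Hugging Face model called with
# output_attentions=True:
# loss = outputs.loss + entropy_attention_regularizer(outputs.attentions)
```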
1 code implementation • 19 Aug 2021 • Federico Bianchi, Giuseppe Attanasio, Raphael Pisoni, Silvia Terragni, Gabriele Sarti, Sri Lakshmi
CLIP (Contrastive Language-Image Pre-training) is a recent multimodal model that jointly learns representations of images and texts.
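To make the joint image-text representation concrete, a minimal zero-shot matching sketch using the Hugging Face CLIP API; the checkpoint shown is OpenAI's public English one, not the model trained in this work:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Public OpenAI checkpoint, used here purely for illustration.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # any local image
texts = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores, one per caption, normalized to probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(texts, probs[0].tolist())))
```

Because images and texts live in the same embedding space, the caption with the highest probability is the model's zero-shot label for the image, with no task-specific fine-tuning.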