1 code implementation • NLP Power (ACL) 2022 • Giuseppe Attanasio, Debora Nozza, Eliana Pastor, Dirk Hovy
In this paper, we provide the first benchmark study of interpretability approaches for hate speech detection.
no code implementations • 20 Dec 2023 • Eleonora Poeta, Gabriele Ciravegna, Eliana Pastor, Tania Cerquitelli, Elena Baralis
The field of explainable artificial intelligence emerged in response to the growing need for more transparent and reliable models.
no code implementations • 14 Sep 2023 • Eliana Pastor, Alkis Koudounas, Giuseppe Attanasio, Dirk Hovy, Elena Baralis
Existing work focuses on a few spoken language understanding (SLU) tasks, and explanations are difficult to interpret for most users.
1 code implementation • 1 Aug 2023 • Alan Perotti, Simone Bertolotto, Eliana Pastor, André Panisson
Finally, we discuss how this approach can be further exploited in terms of explainability and adversarial robustness.
1 code implementation • 14 Jun 2023 • Alkis Koudounas, Moreno La Quatra, Lorenzo Vaiani, Luca Colomba, Giuseppe Attanasio, Eliana Pastor, Luca Cagliero, Elena Baralis
Recent large-scale Spoken Language Understanding datasets focus predominantly on English and do not account for language-specific phenomena such as particular phonemes or words in different lects.
1 code implementation • 2 Aug 2022 • Giuseppe Attanasio, Eliana Pastor, Chiara Di Bonaventura, Debora Nozza
With ferret, users can visualize and compare explanations of Transformer-based models' outputs using state-of-the-art XAI methods on any free text or existing XAI corpora.
no code implementations • 17 Aug 2021 • Eliana Pastor, Luca de Alfaro, Elena Baralis
Furthermore, we quantify the contribution of each attribute in the data subgroup to the divergent behavior by means of Shapley values, thus allowing the identification of the most impactful attributes.
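The Shapley-value attribution mentioned above can be illustrated with a minimal, generic sketch. The attribute names and the toy divergence function below are hypothetical illustrations, not taken from the paper; the code only shows the standard exact Shapley computation over a small set of attributes:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values via the subset formula: each player's value is
    the weighted average of its marginal contributions over all coalitions."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                s = len(subset)
                weight = factorial(s) * factorial(n - s - 1) / factorial(n)
                total += weight * (value(set(subset) | {p}) - value(set(subset)))
        phi[p] = total
    return phi

# Hypothetical divergence of a subgroup defined by a set of attributes:
# an additive toy score plus one interaction term, purely for illustration.
def divergence(attrs):
    base = {"age": 0.10, "sex": 0.05, "zip": 0.02}
    v = sum(base[a] for a in attrs)
    if {"age", "sex"} <= attrs:
        v += 0.03  # interaction between age and sex
    return v

phi = shapley_values(["age", "sex", "zip"], divergence)
# The interaction term is split equally between "age" and "sex",
# and the values sum to the divergence of the full subgroup (efficiency).
```

By the efficiency axiom, the per-attribute values always sum to the total divergence of the subgroup, which is what makes them usable as an attribution of the divergent behavior.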