1 code implementation • nlppower (ACL) 2022 • Giuseppe Attanasio, Debora Nozza, Eliana Pastor, Dirk Hovy
In this paper, we provide the first benchmark study of interpretability approaches for hate speech detection.
1 code implementation • 26 May 2025 • Alkis Koudounas, Moreno La Quatra, Eliana Pastor, Sabato Marco Siniscalchi, Elena Baralis
Kolmogorov-Arnold Networks (KANs) have recently emerged as a promising alternative to traditional neural architectures, yet their application to speech processing remains underexplored.
1 code implementation • 26 Aug 2024 • Flavio Giobergia, Eliana Pastor, Luca de Alfaro, Elena Baralis
Concept drift is a common phenomenon in data streams where the statistical properties of the target variable change over time.
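The setting can be illustrated with a minimal sliding-window drift check (a generic sketch of the concept-drift problem, not the method proposed in the paper): compare a summary statistic of a reference window against the most recent window and flag a change when the deviation is large.

```python
from statistics import mean, stdev

def drift_detected(reference, current, z_threshold=3.0):
    """Flag drift when the current window's mean deviates from the
    reference window's mean by more than z_threshold standard errors.
    A deliberately simple baseline, not the paper's detector."""
    mu, sigma = mean(reference), stdev(reference)
    se = sigma / (len(current) ** 0.5)
    z = abs(mean(current) - mu) / se
    return z > z_threshold

# Stable stream: statistics unchanged, no drift flagged.
stable = [0.1 * (i % 5) for i in range(100)]
print(drift_detected(stable[:50], stable[50:]))    # False

# Shifted stream: the mean jumps mid-stream, drift is flagged.
shifted = stable[:50] + [x + 5.0 for x in stable[50:]]
print(drift_detected(shifted[:50], shifted[50:]))  # True
```

Real detectors (e.g. ADWIN, DDM) track error rates adaptively rather than comparing fixed windows, but the windowed comparison captures the core idea of testing whether the data distribution has changed.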
no code implementations • 26 Aug 2024 • Flavio Giobergia, Eliana Pastor, Luca de Alfaro, Elena Baralis
The ability to detect and adapt to changes in data distributions is crucial to maintain the accuracy and reliability of machine learning models.
1 code implementation • 13 Aug 2024 • Daniele Rege Cambrin, Eleonora Poeta, Eliana Pastor, Tania Cerquitelli, Elena Baralis, Paolo Garza
This paper analyzes the integration of KAN layers into the U-Net architecture (U-KAN) to segment crop fields from Sentinel-2 and Sentinel-1 satellite images, and evaluates the performance and explainability of the resulting networks.
1 code implementation • 20 Jun 2024 • Alkis Koudounas, Flavio Giobergia, Eliana Pastor, Elena Baralis
Speech models may be affected by performance imbalance in different population subgroups, raising concerns about fair treatment across these groups.
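The kind of imbalance described can be made concrete with a small sketch that computes per-subgroup accuracy and the gap between the best- and worst-served groups (a generic illustration; the helper name and data are hypothetical, and the paper's actual fairness analysis is more involved):

```python
from collections import defaultdict

def subgroup_accuracy_gap(labels, preds, groups):
    """Compute per-subgroup accuracy and the gap between the
    best- and worst-performing subgroups."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for y, p, g in zip(labels, preds, groups):
        total[g] += 1
        correct[g] += int(y == p)
    acc = {g: correct[g] / total[g] for g in total}
    return acc, max(acc.values()) - min(acc.values())

# Toy predictions over two demographic subgroups "a" and "b".
labels = [1, 0, 1, 1, 0, 1, 0, 0]
preds  = [1, 0, 1, 0, 1, 1, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
acc, gap = subgroup_accuracy_gap(labels, preds, groups)
print(acc)  # {'a': 0.75, 'b': 0.5}
print(gap)  # 0.25
```

A nonzero gap on equally sized subgroups is the signal of unequal treatment that motivates this line of work.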
1 code implementation • 20 Jun 2024 • Eleonora Poeta, Flavio Giobergia, Eliana Pastor, Tania Cerquitelli, Elena Baralis
Kolmogorov-Arnold Networks (KANs) were recently introduced to the machine learning community and have quickly attracted widespread attention.
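The core idea behind KANs can be sketched in a few lines: instead of fixed activations on nodes and a weight matrix on edges, each edge carries its own learnable univariate function. The sketch below is a toy simplification (a small fixed basis of SiLU plus sines stands in for the trainable B-spline bases used in actual KANs, and no training loop is shown):

```python
import math

def edge_function(x, coeffs):
    """Learnable univariate edge function: a weighted sum of fixed
    basis functions (SiLU plus a few sine frequencies), a simplified
    stand-in for the B-spline parameterization of real KANs."""
    silu = x / (1.0 + math.exp(-x))
    basis = [silu] + [math.sin(k * x) for k in (1, 2, 3)]
    return sum(c * b for c, b in zip(coeffs, basis))

def kan_layer(inputs, edge_coeffs):
    """One KAN-style layer: each output is the sum of learnable
    univariate functions applied to each input. The per-edge
    coefficients are the layer's only parameters."""
    return [
        sum(edge_function(x, coeffs) for x, coeffs in zip(inputs, out_edges))
        for out_edges in edge_coeffs
    ]

# Two inputs -> one output: edge_coeffs[out][in] holds that edge's coefficients.
coeffs = [[[0.5, 0.1, 0.0, 0.0], [1.0, 0.0, 0.2, 0.0]]]
print(kan_layer([0.3, -0.7], coeffs))
```

Training would fit the per-edge coefficients by gradient descent, exactly as one fits weight matrices in an MLP.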
no code implementations • 20 Dec 2023 • Eleonora Poeta, Gabriele Ciravegna, Eliana Pastor, Tania Cerquitelli, Elena Baralis
The field of explainable artificial intelligence emerged in response to the growing need for more transparent and reliable models.
1 code implementation • 14 Sep 2023 • Eliana Pastor, Alkis Koudounas, Giuseppe Attanasio, Dirk Hovy, Elena Baralis
Existing work focuses on a few spoken language understanding (SLU) tasks, and explanations are difficult to interpret for most users.
1 code implementation • 1 Aug 2023 • Alan Perotti, Simone Bertolotto, Eliana Pastor, André Panisson
Finally, we discuss how this approach can be further leveraged for explainability and adversarial robustness.
1 code implementation • 14 Jun 2023 • Alkis Koudounas, Moreno La Quatra, Lorenzo Vaiani, Luca Colomba, Giuseppe Attanasio, Eliana Pastor, Luca Cagliero, Elena Baralis
Recent large-scale Spoken Language Understanding datasets focus predominantly on English and do not account for language-specific phenomena such as particular phonemes or words in different lects.
2 code implementations • 2 Aug 2022 • Giuseppe Attanasio, Eliana Pastor, Chiara Di Bonaventura, Debora Nozza
With ferret, users can visualize and compare output explanations of Transformer-based models using state-of-the-art XAI methods, on any free text or on existing XAI corpora.
no code implementations • 17 Aug 2021 • Eliana Pastor, Luca de Alfaro, Elena Baralis
Furthermore, we quantify the contribution of all attributes in the data subgroup to the divergent behavior by means of Shapley values, thus allowing the identification of the most impacting attributes.
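For small attribute sets, Shapley values can be computed exactly by enumerating subsets. The sketch below illustrates the attribution idea with a hypothetical toy value function (an additive "divergence" score per attribute); the paper's actual divergence measure over data subgroups is different:

```python
from itertools import combinations
from math import factorial

def shapley_values(attributes, value):
    """Exact Shapley values over a small attribute set, where value(S)
    scores the subgroup described by attribute subset S (e.g. its
    divergence from overall model behavior)."""
    n = len(attributes)
    phi = {}
    for a in attributes:
        others = [x for x in attributes if x != a]
        total = 0.0
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Standard Shapley weight for a coalition of size k.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(frozenset(S) | {a}) - value(frozenset(S)))
        phi[a] = total
    return phi

# Hypothetical additive divergence: each attribute contributes a fixed amount.
contrib = {"gender": 0.10, "age": 0.05, "dialect": 0.25}
value = lambda S: sum(contrib[a] for a in S)
print(shapley_values(list(contrib), value))
# For an additive game, each attribute's Shapley value equals its own contribution.
```

Exact enumeration costs O(2^n) evaluations of the value function, which is feasible here because subgroups are described by only a handful of attributes.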