no code implementations • 27 Feb 2024 • Mattia Setzu, Marta Marchiori Manerba, Pasquale Minervini, Debora Nozza
Language Models (LMs) have been shown to inherit undesired biases that might hurt minorities and underrepresented groups if such systems were integrated into real-world applications without careful fairness auditing.
no code implementations • 9 Feb 2024 • Clara Punzi, Roberto Pellungrini, Mattia Setzu, Fosca Giannotti, Dino Pedreschi
Every day, we increasingly rely on machine learning models to automate and support high-stakes tasks and decisions.
1 code implementation • 4 Dec 2023 • Mattia Setzu, Salvatore Ruggieri
Decision Trees are accessible, interpretable, and well-performing classification models.
1 code implementation • 3 Nov 2023 • Mattia Setzu, Silvia Corbara, Anna Monreale, Alejandro Moreo, Fabrizio Sebastiani
While a substantial amount of work has recently been devoted to enhancing the performance of computational Authorship Identification (AId) systems, little to no attention has been paid to endowing AId systems with the ability to explain the reasons behind their predictions.
no code implementations • 25 Oct 2023 • Nafis Irtiza Tripto, Adaku Uchendu, Thai Le, Mattia Setzu, Fosca Giannotti, Dongwon Lee
Thus, we introduce HANSEN (Human ANd ai Spoken tExt beNchmark), the largest benchmark for spoken texts.
1 code implementation • 19 Jan 2021 • Mattia Setzu, Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi, Fosca Giannotti
Our findings show how it is often possible to achieve a high level of both accuracy and comprehensibility of classification models, even in complex domains with high-dimensional data, without necessarily trading one property for the other.