1 code implementation • 24 Mar 2025 • Daphne Lenders, Andrea Pugnana, Roberto Pellungrini, Toon Calders, Dino Pedreschi, Fosca Giannotti
In this setting, fairness concerns often arise when the abstention mechanism reduces errors only for the majority groups in the data, resulting in increased performance differences across demographic groups.
no code implementations • 20 Mar 2025 • Andrea Pugnana, Riccardo Massidda, Francesco Giannini, Pietro Barbiero, Mateo Espinosa Zarlenga, Roberto Pellungrini, Gabriele Dominici, Fosca Giannotti, Davide Bacciu
When intervened on, however, CBMs assume the availability of humans who can identify the need to intervene and always provide correct interventions.
no code implementations • 3 Mar 2025 • Jacopo Joy Colombini, Filippo Bonchi, Francesco Giannini, Fosca Giannotti, Roberto Pellungrini, Patrizio Frosini
This paper introduces a rigorous mathematical framework for neural network explainability and, more broadly, for the explainability of a class of equivariant operators, Group Equivariant Operators (GEOs), based on Group Equivariant Non-Expansive Operator (GENEO) transformations.
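For readers unfamiliar with the notion, the standard definition of a GENEO from the prior literature (not spelled out in this snippet) can be sketched as follows: given perception pairs $(\Phi, G)$ and $(\Psi, H)$ and a group homomorphism $T: G \to H$, an operator $F: \Phi \to \Psi$ is a GENEO if it is equivariant and non-expansive:

```latex
% Equivariance with respect to the group actions:
F(\varphi \circ g) = F(\varphi) \circ T(g), \quad \forall\, \varphi \in \Phi,\ g \in G
% Non-expansiveness with respect to the sup norm:
\| F(\varphi_1) - F(\varphi_2) \|_\infty \le \| \varphi_1 - \varphi_2 \|_\infty, \quad \forall\, \varphi_1, \varphi_2 \in \Phi
```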
no code implementations • 1 Mar 2025 • Benedetta Muscato, Praveen Bushipaka, Gizem Gezici, Lucia Passaro, Fosca Giannotti, Tommaso Cucinotta
Prior studies show that embracing the annotation diversity shaped by annotators' different backgrounds and life experiences, and incorporating it into model learning, i.e., the multi-perspective approach, contributes to the development of more responsible models.
1 code implementation • 13 Nov 2024 • Benedetta Muscato, Praveen Bushipaka, Gizem Gezici, Lucia Passaro, Fosca Giannotti
Subjective NLP tasks usually rely on human annotations provided by multiple annotators, whose judgments may vary due to their diverse backgrounds and life experiences.
no code implementations • 16 Oct 2024 • Daniele Gambetta, Gizem Gezici, Fosca Giannotti, Dino Pedreschi, Alistair Knott, Luca Pappalardo
As synthetic content increasingly infiltrates the web, generative AI models may experience an autophagy process, where they are fine-tuned using their own outputs.
no code implementations • 29 Jun 2024 • Luca Pappalardo, Emanuele Ferragina, Salvatore Citraro, Giuliano Cornacchia, Mirco Nanni, Giulio Rossetti, Gizem Gezici, Fosca Giannotti, Margherita Lalli, Daniele Gambetta, Giovanni Mauro, Virginia Morini, Valentina Pansanella, Dino Pedreschi
Recommendation systems and assistants (in short, recommenders) are ubiquitous in online platforms and influence most actions of our day-to-day lives, suggesting items or providing solutions based on users' preferences or requests.
no code implementations • 9 Feb 2024 • Clara Punzi, Roberto Pellungrini, Mattia Setzu, Fosca Giannotti, Dino Pedreschi
Every day, we increasingly rely on machine learning models to automate and support high-stakes tasks and decisions.
no code implementations • 25 Oct 2023 • Nafis Irtiza Tripto, Adaku Uchendu, Thai Le, Mattia Setzu, Fosca Giannotti, Dongwon Lee
Thus, we introduce HANSEN (Human ANd ai Spoken tExt beNchmark), the largest benchmark for spoken texts.
1 code implementation • World Conference on Explainable Artificial Intelligence 2023 • Martina Cinquini, Fosca Giannotti, Riccardo Guidotti, Andrea Mattei
Missing data are quite common in real scenarios when using Artificial Intelligence (AI) systems for decision-making with tabular data, and effectively handling them poses a significant challenge for such systems.
no code implementations • 23 Jun 2023 • Dino Pedreschi, Luca Pappalardo, Emanuele Ferragina, Ricardo Baeza-Yates, Albert-Laszlo Barabasi, Frank Dignum, Virginia Dignum, Tina Eliassi-Rad, Fosca Giannotti, Janos Kertesz, Alistair Knott, Yannis Ioannidis, Paul Lukowicz, Andrea Passarella, Alex Sandy Pentland, John Shawe-Taylor, Alessandro Vespignani
Human-AI coevolution, defined as a process in which humans and AI algorithms continuously influence each other, increasingly characterises our society, but is understudied in the artificial intelligence and complexity science literature.
1 code implementation • 18 Jan 2023 • Martina Cinquini, Fosca Giannotti, Riccardo Guidotti
However, the variables of a dataset typically depend on one another, and these dependencies are not considered in data generation, leading to the creation of implausible records.
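The point about dependency-unaware generation can be illustrated with a small sketch (the toy data and feature names are ours, not from the paper): resampling each feature from its own marginal destroys the dependency between features, while row-wise sampling preserves it.

```python
import random

random.seed(0)
n = 1000

# Toy data with a built-in dependency: income grows with age.
age = [random.uniform(18, 70) for _ in range(n)]
income = [500 + 40 * a + random.gauss(0, 100) for a in age]

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Dependency-unaware generation: each feature resampled independently,
# which produces implausible records (e.g. teenagers with top incomes).
ind_age = [random.choice(age) for _ in range(n)]
ind_income = [random.choice(income) for _ in range(n)]

# Dependency-aware (here: simple row-wise) sampling keeps features consistent.
rows = [random.randrange(n) for _ in range(n)]
joint_age = [age[i] for i in rows]
joint_income = [income[i] for i in rows]

print("real:", round(corr(age, income), 2))
print("independent:", round(corr(ind_age, ind_income), 2))
print("row-wise:", round(corr(joint_age, joint_income), 2))
```

The independent sample reproduces each marginal exactly, yet the age-income correlation collapses to roughly zero, which is exactly the implausibility the paper targets.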
no code implementations • 25 Nov 2022 • Elena Agliari, Linda Albanese, Francesco Alemanno, Andrea Alessandrelli, Adriano Barra, Fosca Giannotti, Daniele Lotito, Dino Pedreschi
We consider dense associative neural networks trained by a teacher (i.e., with supervision) and investigate their computational capabilities analytically, via the statistical mechanics of spin glasses, and numerically, via Monte Carlo simulations.
no code implementations • 25 Nov 2022 • Elena Agliari, Linda Albanese, Francesco Alemanno, Andrea Alessandrelli, Adriano Barra, Fosca Giannotti, Daniele Lotito, Dino Pedreschi
We consider dense associative neural networks trained without supervision and investigate their computational capabilities analytically, via a statistical-mechanics approach, and numerically, via Monte Carlo simulations.
1 code implementation • 21 Oct 2022 • Mario Alfonso Prado-Romero, Bardh Prenkaj, Giovanni Stilo, Fosca Giannotti
Given the growing attention on graph learning, we focus on the concept of counterfactual explanations (CE) for Graph Neural Networks (GNNs).
no code implementations • 22 Nov 2021 • Carlo Metta, Andrea Beretta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, Salvatore Rinzivillo, Fosca Giannotti
A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems.
1 code implementation • 1 Jun 2021 • Vasiliki Voukelatou, Ioanna Miliou, Fosca Giannotti, Luca Pappalardo
Thus, its measurement has drawn the attention of researchers, policymakers, and peacekeepers.
1 code implementation • 25 Feb 2021 • Francesco Bodria, Fosca Giannotti, Riccardo Guidotti, Francesca Naretto, Dino Pedreschi, Salvatore Rinzivillo
The widespread adoption of black-box models in Artificial Intelligence has enhanced the need for explanation methods to reveal how these obscure models reach specific decisions.
no code implementations • 22 Feb 2021 • Jisu Kim, Alina Sîrbu, Giulio Rossetti, Fosca Giannotti, Hillel Rapoport
In particular, a common language between home and destination countries corresponds to increased home attachment, as does low proficiency in the host language.
1 code implementation • 19 Jan 2021 • Mattia Setzu, Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi, Fosca Giannotti
Our findings show how it is often possible to achieve a high level of both accuracy and comprehensibility of classification models, even in complex domains with high-dimensional data, without necessarily trading one property for the other.
1 code implementation • 8 Dec 2020 • Ioanna Miliou, Xinyue Xiong, Salvatore Rinzivillo, Qian Zhang, Giulio Rossetti, Fosca Giannotti, Dino Pedreschi, Alessandro Vespignani
In this paper, we propose the use of a novel data source, namely retail market data, to improve seasonal influenza forecasting.
no code implementations • 26 Jun 2018 • Dino Pedreschi, Fosca Giannotti, Riccardo Guidotti, Anna Monreale, Luca Pappalardo, Salvatore Ruggieri, Franco Turini
We introduce the local-to-global framework for black box explanation, a novel approach with promising early results, which paves the road for a wide spectrum of future developments along three dimensions: (i) the language for expressing explanations, in terms of highly expressive logic-based rules with a statistical and causal interpretation; (ii) the inference of local explanations, aimed at revealing the logic of the decision adopted for a specific instance by querying and auditing the black box in the vicinity of the target instance; (iii) the bottom-up generalization of the many local explanations into simple global ones, with algorithms that optimize the quality and comprehensibility of explanations.
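The second dimension, inferring a local explanation by querying the black box in the vicinity of an instance, can be sketched roughly as follows. Everything here is a hypothetical stand-in, not the paper's method: the black-box function, the feature names, and a brute-force one-split rule search playing the role of an interpretable surrogate.

```python
import random

random.seed(1)

# Hypothetical black box: the explainer can only query it, not inspect it.
def black_box(income, debt):
    return "approve" if income - 0.8 * debt > 30 else "deny"

def local_rule(instance, n_samples=500, scale=15.0):
    """Query the black box on perturbations around `instance` and extract the
    single-feature threshold rule that best mimics it locally (a stand-in for
    fitting an interpretable surrogate such as a shallow decision tree)."""
    names = ("income", "debt")
    points = [tuple(v + random.gauss(0, scale) for v in instance)
              for _ in range(n_samples)]
    labels = [black_box(*p) for p in points]
    target = black_box(*instance)
    best_acc, best_rule = 0.0, None
    for f in (0, 1):
        thresholds = sorted(p[f] for p in points)[::10]
        for t in thresholds:
            for op in (">", "<="):
                # The rule "feature op t" predicts the instance's own label.
                fires = [(p[f] > t) == (op == ">") for p in points]
                acc = sum((fi and la == target) or (not fi and la != target)
                          for fi, la in zip(fires, labels)) / n_samples
                if acc > best_acc:
                    best_acc = acc
                    best_rule = f"{names[f]} {op} {t:.1f} -> {target}"
    return best_acc, best_rule

acc, rule = local_rule((50, 20))
print(rule, f"(local fidelity {acc:.2f})")
```

The extracted rule is only locally faithful; the framework's third dimension would then merge many such local rules into a global explanation.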
1 code implementation • 28 May 2018 • Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini, Fosca Giannotti
Then it derives, from the logic of the local interpretable predictor, a meaningful explanation consisting of: a decision rule, which explains the reasons for the decision; and a set of counterfactual rules, suggesting the changes to the instance's features that would lead to a different outcome.
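The counterfactual-rule component can be illustrated with a minimal sketch (the black box, features, and one-feature greedy search are hypothetical simplifications, not the paper's algorithm): for each feature, find the smallest single-feature change that flips the black-box outcome.

```python
def black_box(income, debt):
    # Hypothetical opaque classifier, used only through queries.
    return "approve" if income - 0.8 * debt > 30 else "deny"

def counterfactual_rules(instance, step=1.0, max_steps=200):
    """For each feature and direction, search for the smallest single-feature
    change that flips the black-box outcome, and phrase it as a rule."""
    names = ("income", "debt")
    base = black_box(*instance)
    rules = []
    for f in range(len(instance)):
        for direction in (1, -1):
            cand = list(instance)
            for _ in range(max_steps):
                cand[f] += direction * step
                if black_box(*cand) != base:
                    rules.append(
                        f"changing {names[f]} from {instance[f]:.0f} to "
                        f"{cand[f]:.0f} flips the outcome to {black_box(*cand)}")
                    break
    return rules

# A denied applicant: which minimal single-feature changes lead to approval?
for r in counterfactual_rules((40, 20)):
    print(r)
```

Directions that never flip the outcome within the search budget simply contribute no rule, so the output lists only actionable changes.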
1 code implementation • 14 Feb 2018 • Luca Pappalardo, Paolo Cintia, Paolo Ferragina, Emanuele Massucco, Dino Pedreschi, Fosca Giannotti
The problem of evaluating the performance of soccer players is attracting the interest of many companies and the scientific community, thanks to the availability of massive data capturing all the events generated during a match (e.g., tackles, passes, shots).
no code implementations • 6 Feb 2018 • Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Dino Pedreschi, Fosca Giannotti
Black box decision systems can be used in a wide variety of applications, and each approach is typically developed to solve a specific problem, thus delineating, explicitly or implicitly, its own definition of interpretability and explanation.
1 code implementation • 15 Dec 2017 • Giulio Rossetti, Letizia Milli, Salvatore Rinzivillo, Alina Sirbu, Fosca Giannotti, Dino Pedreschi
Nowadays, the analysis of dynamics of and on networks is a hot topic in Social Network Analysis.
no code implementations • 5 Dec 2017 • Luca Pappalardo, Paolo Cintia, Dino Pedreschi, Fosca Giannotti, Albert-Laszlo Barabasi
Humans are routinely asked to evaluate the performance of other individuals, separating success from failure and affecting outcomes from science to education and sports.