no code implementations • 20 Mar 2025 • Stefano Fioravanti, Francesco Giannini, Paolo Frazzetto, Fabio Zanasi, Pietro Barbiero
The most common methods in explainable artificial intelligence are post-hoc techniques which identify the most relevant features used by pretrained opaque models.
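As a concrete illustration of the kind of post-hoc technique described above, here is a minimal sketch using scikit-learn's permutation importance on a pretrained model; the dataset, model, and hyperparameters are placeholders, not the ones studied in the paper.

```python
# Minimal sketch of a post-hoc explanation: permutation feature importance
# on a pretrained, otherwise opaque model (dataset and model are placeholders).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # the "opaque" model

# Shuffle each feature and measure the drop in held-out accuracy:
# large drops mark the features the model relies on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```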
no code implementations • 20 Mar 2025 • Andrea Pugnana, Riccardo Massidda, Francesco Giannini, Pietro Barbiero, Mateo Espinosa Zarlenga, Roberto Pellungrini, Gabriele Dominici, Fosca Giannotti, Davide Bacciu
However, when intervened on, CBMs assume the availability of humans that can identify the need to intervene and always provide correct interventions.
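For readers unfamiliar with concept interventions, the toy sketch below shows how a Concept Bottleneck Model can have a predicted concept overwritten by a human-provided value before the label is computed; the architecture, shapes, and names are illustrative assumptions, not the paper's model.

```python
# Illustrative concept bottleneck model: inputs -> concepts -> task label.
# A human "intervention" overwrites a predicted concept with its true value
# before the label predictor runs. Shapes and names are made up for the sketch.
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, n_features=32, n_concepts=5, n_classes=3):
        super().__init__()
        self.concept_predictor = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                               nn.Linear(64, n_concepts), nn.Sigmoid())
        self.label_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x, interventions=None, true_concepts=None):
        c = self.concept_predictor(x)
        if interventions is not None:
            # Replace the intervened concepts with the values supplied by the human.
            c = torch.where(interventions, true_concepts, c)
        return c, self.label_predictor(c)

model = ConceptBottleneckModel()
x = torch.randn(4, 32)
mask = torch.zeros(4, 5, dtype=torch.bool)
mask[:, 2] = True                 # a human intervenes on concept 2 only
c_true = torch.ones(4, 5)         # assumed ground-truth concept values
concepts, logits = model(x, interventions=mask, true_concepts=c_true)
```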
no code implementations • 17 Feb 2025 • Pietro Barbiero, Giuseppe Marra, Gabriele Ciravegna, David Debot, Francesco De Santis, Michelangelo Diligenti, Mateo Espinosa Zarlenga, Francesco Giannini
We formalize a novel modeling framework for achieving interpretability in deep learning, anchored in the principle of inference equivariance.
no code implementations • 7 Jan 2025 • Mohan Li, Martin Gjoreski, Pietro Barbiero, Gašper Slapničar, Mitja Luštrek, Nicholas D. Lane, Marc Langheinrich
However, its reliance on detailed and often privacy-sensitive data as the basis for its machine learning (ML) models raises significant legal and ethical concerns.
no code implementations • 19 Sep 2024 • Aurora Spagnol, Kacper Sokol, Pietro Barbiero, Marc Langheinrich, Martin Gjoreski
While many explainable artificial intelligence techniques exist for supervised machine learning, unsupervised learning -- and clustering in particular -- has been largely neglected.
1 code implementation • 22 Jul 2024 • David Debot, Pietro Barbiero, Francesco Giannini, Gabriele Ciravegna, Michelangelo Diligenti, Giuseppe Marra
The presence of an explicit memory and the symbolic evaluation allow domain experts to inspect and formally verify the validity of certain global properties of interest for the task prediction process.
no code implementations • 20 Jun 2024 • Francesco De Santis, Philippe Bich, Gabriele Ciravegna, Pietro Barbiero, Danilo Giordano, Tania Cerquitelli
Additionally, we show that our models are (i) interpretable, offering meaningful logical explanations for their predictions; (ii) interactable, allowing humans to modify intermediate predictions through concept interventions; and (iii) controllable, guiding the LLMs' decoding process to follow a required decision-making path.
1 code implementation • 26 May 2024 • Gabriele Dominici, Pietro Barbiero, Mateo Espinosa Zarlenga, Alberto Termine, Martin Gjoreski, Giuseppe Marra, Marc Langheinrich
Causal opacity denotes the difficulty in understanding the "hidden" causal structure underlying the decisions of deep neural network (DNN) models.
no code implementations • 26 May 2024 • Gabriele Dominici, Pietro Barbiero, Francesco Giannini, Martin Gjoreski, Marc Langheinrich
Interpretable deep learning aims at developing neural architectures whose decision-making processes could be understood by their users.
1 code implementation • 24 May 2024 • Dario Fenoglio, Gabriele Dominici, Pietro Barbiero, Alberto Tonda, Martin Gjoreski, Marc Langheinrich
Federated Learning (FL), a privacy-aware approach in distributed deep learning environments, enables many clients to collaboratively train a model without sharing sensitive data, thereby reducing privacy risks.
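As background, here is a minimal sketch of a single federated-averaging round, assuming the standard FedAvg scheme: each client trains on its own private data and only model weights, never raw examples, reach the server. The function names and training loop are illustrative, not this paper's protocol.

```python
# Minimal federated-averaging round: clients train on local, private data and
# the server only aggregates weights, never the raw examples. Purely illustrative.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data_loader, epochs=1, lr=0.01):
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def federated_average(client_states):
    # Element-wise mean of the clients' parameters (equal weighting for simplicity).
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in client_states]).mean(dim=0)
    return avg

# One round, assuming a list of per-client data loaders:
# global_model.load_state_dict(federated_average(
#     [local_update(global_model, loader) for loader in client_loaders]))
```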
1 code implementation • 2 Feb 2024 • Gabriele Dominici, Pietro Barbiero, Francesco Giannini, Martin Gjoreski, Giuseppe Marra, Marc Langheinrich
Current deep learning models are not designed to simultaneously address three fundamental questions: predict class labels to solve a given classification task (the "What?
no code implementations • 4 Dec 2023 • Alessandro Farace di Villaforesta, Lucie Charlotte Magister, Pietro Barbiero, Pietro Liò
To address the challenge of the "black-box" nature of deep learning in medical settings, we combine GCExplainer - an automated concept discovery solution - with Logic Explained Networks to provide global explanations for Graph Neural Networks.
1 code implementation • 25 Nov 2023 • Jonas Jürß, Lucie Charlotte Magister, Pietro Barbiero, Pietro Liò, Nikola Simidjievski
A line of interpretable methods approach this by discovering a small set of relevant concepts as subgraphs in the last GNN layer that together explain the prediction.
1 code implementation • 11 Nov 2023 • Donato Crisostomi, Irene Cannistraci, Luca Moschella, Pietro Barbiero, Marco Ciccone, Pietro Liò, Emanuele Rodolà
Models trained on semantically related datasets and tasks exhibit comparable inter-sample relations within their latent spaces.
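One common way to make such inter-sample relations comparable across models, in the spirit of relative representations, is to encode every sample by its cosine similarities to a shared set of anchor samples; the encoders, data, and anchor choice below are placeholders, not the paper's setup.

```python
# Sketch: compare two latent spaces through their inter-sample relations by
# encoding each sample relative to a shared set of anchor points (cosine
# similarities). Encoders, data, and anchors are placeholders.
import torch
import torch.nn.functional as F

def relative_projection(embeddings, anchor_embeddings):
    # Row i holds the cosine similarity of sample i to each anchor.
    return F.normalize(embeddings, dim=-1) @ F.normalize(anchor_embeddings, dim=-1).T

x = torch.randn(100, 16)          # shared samples (placeholder data)
anchors_idx = torch.arange(10)    # the same anchor samples for both models

enc_a = torch.nn.Linear(16, 32)   # two unrelated encoders with different latent sizes
enc_b = torch.nn.Linear(16, 64)
rel_a = relative_projection(enc_a(x), enc_a(x[anchors_idx]))
rel_b = relative_projection(enc_b(x), enc_b(x[anchors_idx]))

# Despite different absolute latent spaces, the relative views can be compared directly.
print(F.cosine_similarity(rel_a, rel_b, dim=-1).mean())
```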
1 code implementation • 23 Aug 2023 • Pietro Barbiero, Francesco Giannini, Gabriele Ciravegna, Michelangelo Diligenti, Giuseppe Marra
The design of interpretable deep learning models working in relational domains poses an open challenge: interpretable deep learning methods, such as Concept Bottleneck Models (CBMs), are not designed to solve relational problems, while relational deep learning models, such as Graph Neural Networks (GNNs), are not as interpretable as CBMs.
1 code implementation • 1 Jul 2023 • Gabriele Dominici, Pietro Barbiero, Lucie Charlotte Magister, Pietro Liò, Nikola Simidjievski
Multimodal learning is an essential paradigm for addressing complex real-world problems, where individual data modalities are typically insufficient to accurately solve a given modelling task.
no code implementations • 27 Apr 2023 • Pietro Barbiero, Stefano Fioravanti, Francesco Giannini, Alberto Tonda, Pietro Lio, Elena Di Lavore
Explainable AI (XAI) aims to address the human need for safe and reliable AI systems.
1 code implementation • 27 Apr 2023 • Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Mateo Espinosa Zarlenga, Lucie Charlotte Magister, Alberto Tonda, Pietro Lio', Frederic Precioso, Mateja Jamnik, Giuseppe Marra
Deep learning methods are highly accurate, yet their opaque decision process prevents them from earning full human trust.
1 code implementation • 9 Feb 2023 • Dmitry Kazhdan, Botty Dimanov, Lucie Charlotte Magister, Pietro Barbiero, Mateja Jamnik, Pietro Lio
Explainable AI (XAI) underwent a recent surge in research on concept extraction, focusing on extracting human-interpretable concepts from Deep Neural Networks.
Explainable Artificial Intelligence (XAI) • Molecular Property Prediction • +2
1 code implementation • 25 Jan 2023 • Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Adrian Weller, Mateja Jamnik
In this paper, we show that such metrics are not appropriate for concept learning and propose novel metrics for evaluating the purity of concept representations in both approaches.
2 code implementations • 4 Nov 2022 • Rishabh Jain, Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Davide Buffelli, Pietro Lio
Recently, Logic Explained Networks (LENs) have been proposed as explainable-by-design neural models providing logic explanations for their predictions.
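To give a feel for what a logic explanation looks like, the toy snippet below turns binarized concept activations and model predictions into a disjunctive rule; it only illustrates the flavour of the output, not the LEN training or explanation-extraction algorithm itself.

```python
# Toy illustration of a logic explanation: turn binarized concept activations and
# model predictions into a DNF rule (disjunction of observed concept patterns).
import numpy as np

concept_names = ["has_wings", "lays_eggs", "has_beak"]
concepts = np.array([[1, 1, 1], [1, 0, 1], [0, 1, 0], [0, 0, 0]])   # binarized concepts
predictions = np.array([1, 1, 0, 0])                                # model output for class "bird"

minterms = set()
for row, pred in zip(concepts, predictions):
    if pred == 1:
        term = " & ".join(name if v else f"~{name}" for name, v in zip(concept_names, row))
        minterms.add(term)

print("bird <- " + " | ".join(sorted(minterms)))
```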
1 code implementation • 13 Oct 2022 • Steve Azzolin, Antonio Longa, Pietro Barbiero, Pietro Liò, Andrea Passerini
While instance-level explanation of GNNs is a well-studied problem with plenty of approaches being developed, providing a global explanation for the behaviour of a GNN is much less explored, despite its potential in interpretability and debugging.
1 code implementation • 19 Sep 2022 • Mateo Espinosa Zarlenga, Pietro Barbiero, Gabriele Ciravegna, Giuseppe Marra, Francesco Giannini, Michelangelo Diligenti, Zohreh Shams, Frederic Precioso, Stefano Melacci, Adrian Weller, Pietro Lio, Mateja Jamnik
Deploying AI-powered systems requires trustworthy models supporting effective human interactions, going beyond raw prediction accuracy.
1 code implementation • 22 Aug 2022 • Han Xuanyuan, Pietro Barbiero, Dobrik Georgiev, Lucie Charlotte Magister, Pietro Lió
We propose a novel approach for producing global explanations for GNNs using neuron-level concepts to enable practitioners to have a high-level view of the model.
no code implementations • 27 Jul 2022 • Lucie Charlotte Magister, Pietro Barbiero, Dmitry Kazhdan, Federico Siciliano, Gabriele Ciravegna, Fabrizio Silvestri, Mateja Jamnik, Pietro Lio
The opaque reasoning of Graph Neural Networks induces a lack of human trust.
no code implementations • 29 Sep 2021 • Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Mateja Jamnik
Recent work on Explainable AI has focused on concept-based explanations, where deep learning models are explained in terms of high-level units of information, referred to as concepts.
2 code implementations • 11 Aug 2021 • Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Marco Gori, Pietro Lió, Marco Maggini, Stefano Melacci
The language used to communicate the explanations must be formal enough to be implementable in a machine and friendly enough to be understandable by a wide audience.
1 code implementation • 15 Jul 2021 • Dobrik Georgiev, Pietro Barbiero, Dmitry Kazhdan, Petar Veličković, Pietro Liò
Recent research on graph neural network (GNN) models successfully applied GNNs to classical graph algorithms and combinatorial optimisation problems.
3 code implementations • 12 Jun 2021 • Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Pietro Lió, Marco Gori, Stefano Melacci
Explainable artificial intelligence has rapidly emerged since lawmakers have started requiring interpretable models for safety-critical domains.
Ranked #1 on Image Classification on CUB
1 code implementation • 25 May 2021 • Pietro Barbiero, Gabriele Ciravegna, Dobrik Georgiev, Francesco Giannini
"PyTorch, Explain!"
1 code implementation • 17 Sep 2020 • Pietro Barbiero, Ramon Viñas Torné, Pietro Lió
Objective: Modern medicine needs to shift from a wait-and-react, curative discipline to a preventative, interdisciplinary science aiming to provide personalised, systemic and precise treatment plans to patients.
no code implementations • 6 Sep 2020 • Giansalvo Cirrincione, Pietro Barbiero, Gabriele Ciravegna, Vincenzo Randazzo
The former is just an adaptation of a standard competitive layer for deep clustering, while the latter is trained on the transposed matrix.
1 code implementation • 21 Aug 2020 • Pietro Barbiero, Gabriele Ciravegna, Vincenzo Randazzo, Giansalvo Cirrincione
The aim of this work is to present a novel comprehensive theory aspiring to bridge competitive learning with gradient-based learning, thus allowing the use of extremely powerful deep neural networks for feature extraction and projection, combined with the remarkable flexibility and expressiveness of competitive learning.
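A minimal sketch of what bridging the two paradigms can look like, assuming a soft winner-take-all prototype layer trained end-to-end by gradient descent on top of a deep feature extractor; the layer, loss, and hyperparameters are illustrative and not the paper's exact formulation.

```python
# Prototypes compete for each input; the soft assignment keeps the competition
# differentiable, so prototypes and the deep encoder train jointly by gradient descent.
import torch
import torch.nn as nn

class SoftCompetitiveLayer(nn.Module):
    def __init__(self, n_prototypes, dim, temperature=0.1):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, dim))
        self.temperature = temperature

    def forward(self, z):
        dists = torch.cdist(z, self.prototypes)                      # (batch, n_prototypes)
        assignments = torch.softmax(-dists / self.temperature, dim=-1)
        # Quantisation loss: each point is pulled toward its (softly) winning prototype.
        loss = (assignments * dists.pow(2)).sum(dim=-1).mean()
        return assignments, loss

encoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # deep feature extractor
layer = SoftCompetitiveLayer(n_prototypes=4, dim=2)
opt = torch.optim.Adam(list(encoder.parameters()) + list(layer.parameters()), lr=1e-2)

x = torch.randn(256, 8)
for _ in range(100):
    opt.zero_grad()
    _, loss = layer(encoder(x))
    loss.backward()
    opt.step()
```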
1 code implementation • 28 Jun 2020 • Pietro Barbiero, Giovanni Squillero, Alberto Tonda
As machine learning becomes more and more available to the general public, theoretical questions are turning into pressing practical issues.
1 code implementation • Artificial Evolution 2020 • Pietro Barbiero, Evelyne Lutton, Giovanni Squillero, Alberto Tonda
We thus propose a multi-objective optimization approach to feature selection, EvoFS, with the objectives to i. minimize feature subset size, ii.
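To illustrate the bi-objective trade-off, the sketch below scores random feature subsets by size and cross-validated error and keeps the Pareto-optimal ones; the error objective and the random search are assumptions made for the example (the snippet above is truncated) and this is not the EvoFS evolutionary algorithm itself.

```python
# Toy bi-objective feature selection: objective 1 is subset size, objective 2 is
# cross-validated error (an assumed second objective). Random subsets stand in
# for an evolutionary search; only the Pareto-optimal ones are kept.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

candidates = []
for _ in range(50):
    mask = rng.random(X.shape[1]) < 0.3
    if not mask.any():
        continue
    error = 1 - cross_val_score(LogisticRegression(max_iter=2000), X[:, mask], y, cv=3).mean()
    candidates.append((int(mask.sum()), float(error), mask))

def dominated(i):
    size_i, err_i, _ = candidates[i]
    return any(s <= size_i and e <= err_i and (s < size_i or e < err_i)
               for j, (s, e, _) in enumerate(candidates) if j != i)

# A subset is Pareto-optimal if no other subset is both smaller and more accurate.
pareto = [candidates[i] for i in range(len(candidates)) if not dominated(i)]
for size, error, _ in sorted(pareto, key=lambda c: c[0]):
    print(f"{size} features, CV error {error:.3f}")
```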
1 code implementation • 20 Feb 2020 • Pietro Barbiero, Giovanni Squillero, Alberto Tonda
A coreset is a subset of the training set, using which a machine learning algorithm obtains performances similar to what it would deliver if trained over the whole original data.
Ranked #1 on Core set discovery on MNIST
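A toy illustration of the coreset idea from the entry above, assuming a random subset rather than the paper's discovery method: a learner trained on a small fraction of the data can approach the accuracy of training on all of it.

```python
# Compare a model trained on 10% of the training data with one trained on all of it.
# The subset here is chosen at random, purely to illustrate the coreset definition.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

full = LogisticRegression(max_iter=5000).fit(X_train, y_train)

subset = slice(0, len(X_train) // 10)          # first 10% of the (shuffled) training set
small = LogisticRegression(max_iter=5000).fit(X_train[subset], y_train[subset])

print(f"full training set: {full.score(X_test, y_test):.3f}")
print(f"10% subset:        {small.score(X_test, y_test):.3f}")
```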
1 code implementation • Neural Networks 2020 • Giansalvo Cirrincione, Gabriele Ciravegna, Pietro Barbiero, Vincenzo Randazzo, Eros Pasero
Furthermore, an important and very promising application of GH-EXIN to two-way hierarchical clustering, for the analysis of gene expression data in the study of colorectal cancer, is described.