1 code implementation • 9 Feb 2023 • Dmitry Kazhdan, Botty Dimanov, Lucie Charlotte Magister, Pietro Barbiero, Mateja Jamnik, Pietro Lio
Explainable AI (XAI) has recently seen a surge of research on concept extraction, which aims to recover human-interpretable concepts from Deep Neural Networks.
1 code implementation • 25 Jan 2023 • Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Adrian Weller, Mateja Jamnik
In this paper, we show that such metrics are not appropriate for concept learning and propose novel metrics for evaluating the purity of concept representations in both approaches.
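The snippet above does not spell out the proposed purity metrics, but the underlying idea can be illustrated with a hand-rolled sketch (all names here are hypothetical, not from the paper): if concept representations are pure, each learned concept dimension should align with exactly one ground-truth concept, so cross-concept correlation should be low off the diagonal.

```python
import numpy as np

def purity_matrix(concept_scores, concept_labels):
    """Absolute correlation between each learned concept score (columns of
    concept_scores) and each ground-truth concept label. For a 'pure'
    representation, the diagonal should dominate each row."""
    k = concept_scores.shape[1]
    M = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            M[i, j] = abs(np.corrcoef(concept_scores[:, i],
                                      concept_labels[:, j])[0, 1])
    return M

# Toy data: concept 0's score tracks label 0 cleanly;
# concept 1's score leaks information about label 0.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=(500, 2)).astype(float)
scores = np.stack([labels[:, 0] + 0.1 * rng.normal(size=500),
                   labels[:, 1] + 0.5 * labels[:, 0]], axis=1)
M = purity_matrix(scores, labels)
```

Here the large off-diagonal entry `M[1, 0]` exposes the leakage, which is the kind of impurity such metrics are designed to surface.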
no code implementations • 14 Nov 2022 • Shea Cardozo, Gabriel Islas Montero, Dmitry Kazhdan, Botty Dimanov, Maleakhi Wijaya, Mateja Jamnik, Pietro Lio
Recent work has suggested post-hoc explainers might be ineffective for detecting spurious correlations in Deep Neural Networks (DNNs).
no code implementations • 27 Jul 2022 • Lucie Charlotte Magister, Pietro Barbiero, Dmitry Kazhdan, Federico Siciliano, Gabriele Ciravegna, Fabrizio Silvestri, Mateja Jamnik, Pietro Lio
The opaque reasoning of Graph Neural Networks undermines human trust.
no code implementations • 29 Sep 2021 • Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Mateja Jamnik
Recent work on Explainable AI has focused on concept-based explanations, where deep learning models are explained in terms of high-level units of information, referred to as concepts.
no code implementations • 25 Jul 2021 • Lucie Charlotte Magister, Dmitry Kazhdan, Vikash Singh, Pietro Liò
Motivated by the aim of providing global explanations, we adapt the well-known Automated Concept-based Explanation approach (Ghorbani et al., 2019) to GNN node and graph classification, and propose GCExplainer.
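The core step ACE builds on is clustering hidden activations so that each cluster acts as a candidate concept. As a minimal illustration of that clustering step only (a sketch with assumed names, not GCExplainer's actual pipeline for GNN embeddings):

```python
import numpy as np

def kmeans_concepts(activations, k=3, iters=20, seed=0):
    """Cluster activation vectors with plain k-means; each centroid serves
    as a candidate 'concept' direction. Illustrative only."""
    rng = np.random.default_rng(seed)
    centroids = activations[rng.choice(len(activations), k, replace=False)]
    for _ in range(iters):
        # Assign each activation to its nearest centroid (squared L2).
        dists = ((activations[:, None, :] - centroids[None]) ** 2).sum(-1)
        assign = np.argmin(dists, axis=1)
        # Move each centroid to the mean of its assigned points.
        for c in range(k):
            if (assign == c).any():
                centroids[c] = activations[assign == c].mean(axis=0)
    return centroids, assign

# Toy activations drawn from three well-separated blobs.
rng = np.random.default_rng(2)
acts = np.concatenate([rng.normal(loc, 0.1, size=(50, 4))
                       for loc in (0.0, 5.0, 10.0)])
centroids, assign = kmeans_concepts(acts, k=3)
```

In the GNN setting, the activations would be node or graph embeddings rather than image-segment features, which is the adaptation the paper makes.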
1 code implementation • 15 Jul 2021 • Dobrik Georgiev, Pietro Barbiero, Dmitry Kazhdan, Petar Veličković, Pietro Liò
Recent research on graph neural network (GNN) models has successfully applied them to classical graph algorithms and combinatorial optimisation problems.
1 code implementation • NeurIPS 2021 • Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross Anderson
Machine learning is vulnerable to a wide variety of attacks.
1 code implementation • 18 Apr 2021 • Maleakhi A. Wijaya, Dmitry Kazhdan, Botty Dimanov, Mateja Jamnik
Using two case studies (dSprites and 3dshapes), we demonstrate how CBSD can accurately detect underlying concepts that are affected by shifts and achieve higher detection accuracy compared to state-of-the-art shift detection methods.
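CBSD's exact detection procedure is not given in this snippet, but the general idea of concept-based shift detection can be sketched (function and threshold are illustrative assumptions, not CBSD's actual method): compare per-concept score distributions between a reference batch and a deployment batch, and flag the concepts whose distribution moved.

```python
import numpy as np

def detect_shifted_concepts(ref_scores, new_scores, threshold=3.0):
    """Flag concept dimensions whose mean moved by more than `threshold`
    standard errors between batches (a crude per-concept two-sample
    z-test; a stand-in for a proper statistical shift test)."""
    shifted = []
    for c in range(ref_scores.shape[1]):
        a, b = ref_scores[:, c], new_scores[:, c]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        if abs(a.mean() - b.mean()) / se > threshold:
            shifted.append(c)
    return shifted

rng = np.random.default_rng(1)
ref = rng.normal(size=(400, 3))
new = rng.normal(size=(400, 3))
new[:, 1] += 1.0   # simulate a shift affecting concept 1 only
shifted = detect_shifted_concepts(ref, new)
```

The payoff of operating on concept scores rather than raw inputs is exactly what the snippet claims: the detector reports *which* underlying concept shifted, not merely that some shift occurred.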
1 code implementation • 14 Apr 2021 • Dmitry Kazhdan, Botty Dimanov, Helena Andres Terre, Mateja Jamnik, Pietro Liò, Adrian Weller
Concept-based explanations have emerged as a popular way of extracting human-interpretable representations from deep discriminative models.
1 code implementation • 13 Dec 2020 • Dmitry Kazhdan, Botty Dimanov, Mateja Jamnik, Pietro Liò
Recurrent Neural Networks (RNNs) have achieved remarkable performance on a range of tasks.
1 code implementation • 25 Oct 2020 • Dmitry Kazhdan, Botty Dimanov, Mateja Jamnik, Pietro Liò, Adrian Weller
Deep Neural Networks (DNNs) have achieved remarkable performance on a range of tasks.
1 code implementation • 16 Apr 2020 • Dmitry Kazhdan, Zohreh Shams, Pietro Liò
Multi-Agent Reinforcement Learning (MARL) encompasses a powerful class of methodologies that have been applied in a wide range of fields.