1 code implementation • 9 Feb 2023 • Dmitry Kazhdan, Botty Dimanov, Lucie Charlotte Magister, Pietro Barbiero, Mateja Jamnik, Pietro Lio
Explainable AI (XAI) has recently seen a surge of research on concept extraction, which focuses on extracting human-interpretable concepts from Deep Neural Networks.
Explainable Artificial Intelligence (XAI) • Molecular Property Prediction
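As a hedged illustration of what concept extraction means in practice (this is not the method proposed in the paper above), the sketch below checks whether a binary concept is linearly decodable from a layer's activations via a simple probe. The activations, dimensions, and planted signal are all synthetic stand-ins.

```python
# Minimal sketch of concept extraction via linear probing.
# Not the paper's method; it only illustrates the general idea of testing
# whether a human-interpretable concept is linearly decodable from a
# network's hidden activations. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical hidden activations (n_samples x hidden_dim) and binary
# concept labels (e.g. "object is round"), stand-ins for real model outputs.
activations = rng.normal(size=(1000, 64))
concept_labels = (activations[:, :8].sum(axis=1) > 0).astype(int)  # planted signal

X_train, X_test, y_train, y_test = train_test_split(
    activations, concept_labels, test_size=0.25, random_state=0
)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"concept probe accuracy: {probe.score(X_test, y_test):.2f}")
# High probe accuracy suggests the concept is encoded in this layer.
```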
no code implementations • 14 Nov 2022 • Shea Cardozo, Gabriel Islas Montero, Dmitry Kazhdan, Botty Dimanov, Maleakhi Wijaya, Mateja Jamnik, Pietro Lio
Recent work has suggested post-hoc explainers might be ineffective for detecting spurious correlations in Deep Neural Networks (DNNs).
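For context, here is a minimal sketch of the kind of post-hoc explainer under discussion: gradient-times-input attribution applied to a toy model trained on data with a planted shortcut feature. This is illustrative only, not the paper's experimental setup; the model, data, and spurious correlation are synthetic.

```python
# Hedged sketch: does a simple gradient-based post-hoc explainer surface
# a planted spurious feature? Illustrative only; model and data are synthetic.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Feature 0 is spuriously correlated with the label in the training data.
X = torch.randn(512, 10)
y = (X[:, 0] > 0).long()  # the "shortcut" the model can latch onto

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

# Gradient x input attribution for one example: does it flag feature 0?
x = X[:1].clone().requires_grad_(True)
model(x)[0, y[0]].backward()
attribution = (x.grad * x).detach().squeeze().abs()
print("top attributed feature:", attribution.argmax().item())  # expect 0
```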
1 code implementation • 18 Apr 2021 • Maleakhi A. Wijaya, Dmitry Kazhdan, Botty Dimanov, Mateja Jamnik
Using two case studies (dSprites and 3dshapes), we demonstrate that CBSD can accurately detect the underlying concepts affected by a shift, and that it achieves higher detection accuracy than state-of-the-art shift detection methods.
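A minimal sketch of the underlying idea, detecting shift at the level of concepts by running a two-sample test on concept scores, is shown below. It assumes a hypothetical concept predictor and synthetic scores; it is not the paper's CBSD implementation.

```python
# Illustrative sketch of concept-level shift detection: compare a concept's
# predicted values on reference vs. incoming data with a two-sample test.
# Synthetic stand-in, not the paper's code.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical concept scores (e.g. predicted "scale" on dSprites-like data).
reference_scores = rng.normal(loc=0.0, scale=1.0, size=2000)
incoming_scores = rng.normal(loc=0.6, scale=1.0, size=2000)  # shifted concept

stat, p_value = ks_2samp(reference_scores, incoming_scores)
if p_value < 0.01:
    print(f"shift detected in concept (KS={stat:.3f}, p={p_value:.2e})")
else:
    print("no significant shift in this concept")
```

Running one such test per concept localizes a distribution shift to the specific human-interpretable factors that changed, rather than flagging the raw inputs as a whole.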
1 code implementation • 14 Apr 2021 • Dmitry Kazhdan, Botty Dimanov, Helena Andres Terre, Mateja Jamnik, Pietro Liò, Adrian Weller
Concept-based explanations have emerged as a popular way of extracting human-interpretable representations from deep discriminative models.
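One well-known member of this family is a TCAV-style concept activation vector (Kim et al., 2018). The hedged sketch below fits a linear separator between concept and random activations and takes its normal as the concept direction; it uses synthetic activations and a stand-in gradient, and is not the method introduced in this paper.

```python
# Hedged sketch of a TCAV-style concept activation vector (CAV), shown only
# to illustrate the genre of concept-based explanations. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Activations for examples WITH the concept vs. random counterexamples.
concept_acts = rng.normal(loc=0.5, size=(200, 32))
random_acts = rng.normal(loc=0.0, size=(200, 32))

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 200 + [0] * 200)

clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])  # concept activation vector

# Sensitivity of a (hypothetical) class logit to the concept direction:
# the sign of grad_logit . cav says whether the concept pushes the class up.
grad_logit = rng.normal(size=32)  # stand-in for a real gradient
print("directional derivative:", float(grad_logit @ cav))
```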
1 code implementation • 13 Dec 2020 • Dmitry Kazhdan, Botty Dimanov, Mateja Jamnik, Pietro Liò
Recurrent Neural Networks (RNNs) have achieved remarkable performance on a range of tasks.
1 code implementation • 25 Oct 2020 • Dmitry Kazhdan, Botty Dimanov, Mateja Jamnik, Pietro Liò, Adrian Weller
Deep Neural Networks (DNNs) have achieved remarkable performance on a range of tasks.
no code implementations • 27 Sep 2018 • Botty Dimanov, Mateja Jamnik
In this paper, we introduce a novel method, called step-wise sensitivity analysis, which makes three contributions towards increasing the interpretability of Deep Neural Networks (DNNs).
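As a rough, hedged sketch of the general idea (not the authors' exact algorithm), the snippet below walks a small network layer by layer and ranks each layer's input units by the magnitude of the output gradient, a step-wise notion of sensitivity. The model and input are synthetic.

```python
# Rough sketch of step-wise, layer-by-layer sensitivity: at each layer,
# take the gradient of the selected output w.r.t. that layer's input and
# rank units by |gradient|. Illustrative only, not the authors' algorithm.
import torch
import torch.nn as nn

torch.manual_seed(0)
layers = nn.ModuleList([nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 3)])
x = torch.randn(1, 8)

# Forward pass, retaining gradients on each intermediate input.
inputs = []
h = x.requires_grad_(True)
for layer in layers:
    inputs.append(h)
    h = layer(h)
    h.retain_grad()

h[0, h.argmax()].backward()  # sensitivity of the top logit

for i, inp in enumerate(inputs):
    if inp.grad is not None:
        top = inp.grad.abs().argmax().item()
        print(f"layer {i}: most sensitive input unit = {top}")
```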