Search Results for author: Botty Dimanov

Found 7 papers, 5 papers with code

GCI: A (G)raph (C)oncept (I)nterpretation Framework

1 code implementation • 9 Feb 2023 • Dmitry Kazhdan, Botty Dimanov, Lucie Charlotte Magister, Pietro Barbiero, Mateja Jamnik, Pietro Lio

Explainable AI (XAI) has recently seen a surge of research on concept extraction, which focuses on extracting human-interpretable concepts from Deep Neural Networks.

Explainable Artificial Intelligence (XAI) • Molecular Property Prediction • +1

Failing Conceptually: Concept-Based Explanations of Dataset Shift

1 code implementation • 18 Apr 2021 • Maleakhi A. Wijaya, Dmitry Kazhdan, Botty Dimanov, Mateja Jamnik

Using two case studies (dSprites and 3dshapes), we demonstrate how CBSD can accurately detect the underlying concepts affected by a shift and achieve higher detection accuracy than state-of-the-art shift detection methods.
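As a rough illustration of the general idea behind concept-based shift detection (a minimal sketch, not the paper's CBSD implementation): predict concept scores for a reference set and a deployment set, then flag concepts whose score distributions differ under a two-sample test. The function name, the significance threshold, and the synthetic data below are illustrative assumptions.

```python
# Generic concept-based shift detection sketch (assumed illustration, not CBSD):
# compare per-concept prediction distributions with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

def detect_shifted_concepts(ref_concepts, test_concepts, alpha=0.05):
    """ref_concepts, test_concepts: arrays of shape (n_samples, n_concepts)
    holding concept scores predicted for each input. Returns indices of
    concepts whose distributions differ significantly between the two sets."""
    shifted = []
    for c in range(ref_concepts.shape[1]):
        _, p_value = ks_2samp(ref_concepts[:, c], test_concepts[:, c])
        if p_value < alpha:  # reject "same distribution" for this concept
            shifted.append(c)
    return shifted

# Hypothetical usage with random data standing in for concept predictions.
rng = np.random.default_rng(0)
ref = rng.normal(size=(500, 6))
test = ref.copy()
test[:, 2] += 1.5  # simulate a shift affecting concept 2
print(detect_shifted_concepts(ref, test))  # -> [2]
```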

Is Disentanglement all you need? Comparing Concept-based & Disentanglement Approaches

1 code implementation • 14 Apr 2021 • Dmitry Kazhdan, Botty Dimanov, Helena Andres Terre, Mateja Jamnik, Pietro Liò, Adrian Weller

Concept-based explanations have emerged as a popular way of extracting human-interpretable representations from deep discriminative models.

Disentanglement

Now You See Me (CME): Concept-based Model Extraction

1 code implementation • 25 Oct 2020 • Dmitry Kazhdan, Botty Dimanov, Mateja Jamnik, Pietro Liò, Adrian Weller

Deep Neural Networks (DNNs) have achieved remarkable performance on a range of tasks.

Model extraction

Step-wise Sensitivity Analysis: Identifying Partially Distributed Representations for Interpretable Deep Learning

no code implementations • 27 Sep 2018 • Botty Dimanov, Mateja Jamnik

In this paper, we introduce a novel method, called step-wise sensitivity analysis, which makes three contributions towards increasing the interpretability of Deep Neural Networks (DNNs).
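For intuition, the sketch below shows a plain input-gradient sensitivity analysis in PyTorch; it is an assumed, generic baseline for gradient-based attribution, not the step-wise procedure introduced in the paper.

```python
# Plain input-gradient sensitivity sketch (assumed generic baseline, not the
# paper's step-wise sensitivity analysis).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
x = torch.randn(1, 10, requires_grad=True)

logits = model(x)
target = logits.argmax(dim=1).item()
# Gradient of the predicted class score w.r.t. each input feature.
logits[0, target].backward()
sensitivity = x.grad.abs().squeeze(0)
print(sensitivity)  # one relevance score per input dimension
```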
