Explainable artificial intelligence

88 papers with code • 0 benchmarks • 7 datasets

XAI refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the "black box" concept in machine learning, where even a system's designers cannot explain why it arrived at a specific decision. XAI may be one implementation of the social right to explanation, but it is relevant even where no legal or regulatory requirement exists: for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. The aim of XAI is thus to explain what has been done, what is being done now, and what will be done next, and to unveil the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new hypotheses.

Most implemented papers

Axiomatic Attribution for Deep Networks

ankurtaly/Attributions ICML 2017

We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works.
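The method introduced here is Integrated Gradients: the attribution for each feature is the input-minus-baseline difference scaled by the model's gradient averaged along the straight-line path from a baseline to the input. Below is a minimal PyTorch sketch, assuming `model` maps a batch to a per-example scalar score (e.g. the target-class logit); the function name and the zero-baseline default are illustrative choices, not the authors' reference code:

```python
import torch

def integrated_gradients(model, x, baseline=None, steps=50):
    # Integrated Gradients: (x - x') times the gradient of the model's
    # scalar score, averaged over the straight-line path from x' to x.
    if baseline is None:
        baseline = torch.zeros_like(x)  # zero baseline; the paper discusses alternatives
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline + alphas * (x - baseline)    # (steps, *x.shape) interpolants
    path.requires_grad_(True)
    score = model(path).sum()                    # per-example scalar scores, summed for autograd
    grads = torch.autograd.grad(score, path)[0]  # gradient at every path point
    return (x - baseline) * grads.mean(dim=0)    # Riemann-sum approximation of the path integral
```

A quick sanity check follows from the paper's completeness axiom: the attributions should sum approximately to `model(x) - model(baseline)`.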

GNNExplainer: Generating Explanations for Graph Neural Networks

RexYing/gnn-model-explainer NeurIPS 2019

We formulate GNNExplainer as an optimization task that maximizes the mutual information between a GNN's prediction and the distribution of possible subgraph structures.
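In practice the mutual-information objective is optimized by relaxing the subgraph to a continuous mask over edges. A rough sketch of that relaxation, assuming a hypothetical `gnn(x, edge_index, edge_weight)` callable that accepts per-edge weights; this is not the authors' reference implementation (the paper also adds a mask-entropy term, omitted here):

```python
import torch

def explain_edges(gnn, x, edge_index, target, epochs=200, lam=0.005):
    # Learn a soft edge mask whose masked prediction stays close to `target`:
    # cross-entropy on the masked graph plus a sparsity penalty on the mask.
    mask_logits = torch.randn(edge_index.size(1), requires_grad=True)
    opt = torch.optim.Adam([mask_logits], lr=0.01)
    for _ in range(epochs):
        opt.zero_grad()
        edge_weight = torch.sigmoid(mask_logits)  # soft mask in (0, 1)
        logits = gnn(x, edge_index, edge_weight)  # prediction on the masked graph
        loss = torch.nn.functional.cross_entropy(logits, target)
        loss = loss + lam * edge_weight.sum()     # encourage a small explanatory subgraph
        loss.backward()
        opt.step()
    return torch.sigmoid(mask_logits).detach()    # per-edge importance scores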

Proposed Guidelines for the Responsible Use of Explainable Machine Learning

jphall663/kdd_2019 8 Jun 2019

Explainable machine learning (ML) enables human learning from ML, human appeal of automated model decisions, regulatory compliance, and security audits of ML models.

Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy

chr5tphr/zennit 24 Jun 2021

Deep Neural Networks (DNNs) are known to be strong predictors, but their prediction strategies can rarely be understood.
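Zennit computes local attributions such as Layer-wise Relevance Propagation (LRP) by attaching rules to PyTorch modules; CoRelAy then aggregates those attributions dataset-wide and ViRelAy visualizes them. To show the kind of quantity being propagated, here is a hand-rolled sketch of the LRP epsilon rule for a plain ReLU MLP; it illustrates the rule only and is not Zennit's API:

```python
import torch

def lrp_epsilon(layers, x, eps=1e-6):
    # Epsilon-rule LRP for a stack of torch.nn.Linear layers with ReLU
    # in between; Zennit applies the same rule via autograd hooks.
    with torch.no_grad():
        activations = [x]
        for layer in layers[:-1]:
            x = torch.relu(layer(x))
            activations.append(x)
        relevance = layers[-1](x)  # output scores
        # (to explain a single class, zero out all other entries of `relevance` here)
        for layer, a in zip(reversed(layers), reversed(activations)):
            z = layer(a)                          # pre-activations
            z = z + eps * torch.where(z >= 0, torch.ones_like(z), -torch.ones_like(z))
            s = relevance / z                     # relevance per unit of pre-activation
            relevance = a * (s @ layer.weight)    # redistribute to the layer's inputs
        return relevance                          # relevance per input feature
```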

SoPa: Bridging CNNs, RNNs, and Weighted Finite-State Machines

Noahs-ARK/soft_patterns 15 May 2018

Recurrent and convolutional neural networks comprise two distinct families of models that have proven to be useful for encoding natural language utterances.

Do Not Trust Additive Explanations

ModelOriented/iBreakDown 27 Mar 2019

Explainable Artificial Intelligence (XAI) has received a great deal of attention recently.

On the Explanation of Machine Learning Predictions in Clinical Gait Analysis

sebastian-lapuschkin/explaining-deep-clinical-gait-classification 16 Dec 2019

Machine learning (ML) is increasingly used to support decision-making in the healthcare sector.

EUCA: the End-User-Centered Explainable AI Framework

weinajin/end-user-xai 4 Feb 2021

The ability to explain decisions to end users is a necessity for deploying AI as critical decision support.

Towards Rigorous Interpretations: a Formalisation of Feature Attribution

DariusAf/functional_attribution 26 Apr 2021

Feature attribution is often loosely presented as the process of selecting a subset of relevant features as a rationale of a prediction.
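As context for what such a formalisation has to pin down, one standard constraint on attribution scores is completeness (also called efficiency), stated below; this is background on a widely used axiom, not necessarily the paper's own definition:

```latex
\[
  g \colon (f, x) \mapsto \big(g_1(f,x), \dots, g_d(f,x)\big) \in \mathbb{R}^d,
  \qquad
  \sum_{i=1}^{d} g_i(f, x) = f(x) - f(x'),
\]
% where $f$ is the model, $x$ the input, and $x'$ a fixed baseline input.
```

Completeness already constrains attributions far more tightly than "selecting a subset of relevant features", which is the gap the paper's formalisation addresses.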

Entropy-based Logic Explanations of Neural Networks

pietrobarbiero/pytorch_explain 12 Jun 2021

Explainable artificial intelligence has rapidly emerged since lawmakers have started requiring interpretable models for safety-critical domains.
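The approach attaches an entropy-regularised layer that keeps only a few relevant concepts, then reads first-order logic explanations off their truth table. A rough, dependency-free sketch of that final rule-extraction step; all names and the interface are hypothetical, not the pytorch_explain API:

```python
def extract_dnf(concepts, predicted, relevant, names):
    # Read a disjunctive-normal-form rule off binarised concept activations:
    # one conjunction per positively predicted sample, restricted to the
    # concepts the entropy layer kept.
    terms = set()
    for sample, pred in zip(concepts, predicted):
        if not bool(pred):
            continue
        literals = [names[i] if bool(sample[i]) else f"~{names[i]}" for i in relevant]
        terms.add(" & ".join(literals))
    return " | ".join(sorted(terms))
```

For example, with `relevant = [0, 2]` and `names = ["round", "red", "striped"]`, the returned rule might read `round & ~striped | ~round & striped`.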