Explainable artificial intelligence

202 papers with code • 0 benchmarks • 8 datasets

XAI refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the "black box" concept in machine learning, where even a model's designers cannot explain why it arrived at a specific decision. XAI may be seen as an implementation of the social right to explanation, but it is relevant even where no legal right or regulatory requirement exists: for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. In this way, XAI aims to explain what has been done, what is being done now, and what will be done next, and to reveal the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.

Most implemented papers

Axiomatic Attribution for Deep Networks

ankurtaly/Attributions ICML 2017

We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works.
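
The method introduced here, Integrated Gradients, attributes a prediction F(x) to each input feature by accumulating gradients along the straight-line path from a baseline x' to the input x. Below is a minimal PyTorch sketch for a single unbatched example; the function name and step count are illustrative, not the reference implementation:

```python
import torch

def integrated_gradients(model, x, baseline, target, steps=50):
    # Approximate IG_i(x) = (x_i - x'_i) * \int_0^1 dF/dx_i(x' + a(x - x')) da
    # with a Riemann sum over `steps` points on the straight-line path.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline.unsqueeze(0) + alphas * (x - baseline).unsqueeze(0)
    path = path.detach().requires_grad_(True)

    # Gradient of the target logit with respect to every point on the path.
    logits = model(path)[:, target].sum()
    grads, = torch.autograd.grad(logits, path)

    # Average the gradients along the path and scale by (x - baseline).
    return (x - baseline) * grads.mean(dim=0)
```

The paper's completeness axiom gives a quick sanity check: the attributions should sum approximately to F(x) - F(x'), with the gap shrinking as `steps` grows.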

GNNExplainer: Generating Explanations for Graph Neural Networks

RexYing/gnn-model-explainer NeurIPS 2019

We formulate GNNExplainer as an optimization task that maximizes the mutual information between a GNN's prediction and the distribution of possible subgraph structures.
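
Concretely, the paper's objective selects a compact subgraph G_S with associated features X_S that retains the most information about the prediction Y:

\max_{G_S} \; \mathrm{MI}\big(Y, (G_S, X_S)\big) \;=\; H(Y) \;-\; H\big(Y \mid G = G_S,\, X = X_S\big)

Since H(Y) is constant for a trained GNN, this reduces to minimizing the conditional entropy, which GNNExplainer optimizes in practice by learning a continuous mask over the graph's edges.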

Proposed Guidelines for the Responsible Use of Explainable Machine Learning

jphall663/kdd_2019 8 Jun 2019

Explainable machine learning (ML) enables human learning from ML, human appeal of automated model decisions, regulatory compliance, and security audits of ML models.

Entropy-based Logic Explanations of Neural Networks

pietrobarbiero/pytorch_explain 12 Jun 2021

Explainable artificial intelligence has rapidly emerged since lawmakers have started requiring interpretable models for safety-critical domains.

Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy

chr5tphr/zennit 24 Jun 2021

Deep Neural Networks (DNNs) are known to be strong predictors, but their prediction strategies can rarely be understood.
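
For the local-explanation side, a minimal sketch of Zennit's composite/attributor pattern follows; the model choice, composite, and class index are illustrative assumptions, so check the Zennit documentation for the exact API of your installed version:

```python
import torch
from torchvision.models import vgg16
from zennit.attribution import Gradient
from zennit.composites import EpsilonPlusFlat

model = vgg16(weights=None).eval()
x = torch.randn(1, 3, 224, 224, requires_grad=True)

# A composite maps LRP rules onto the model's layers; the Gradient
# attributor then runs the rule-modified backward pass.
composite = EpsilonPlusFlat()
with Gradient(model=model, composite=composite) as attributor:
    # Select class 0 via a one-hot vector over the 1000 logits.
    output, relevance = attributor(x, torch.eye(1000)[[0]])

heatmap = relevance.sum(1)  # aggregate color channels for display
```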

MEGAN: Multi-Explanation Graph Attention Network

aimat-lab/gcnn_keras 23 Nov 2022

Unlike existing graph explainability methods, our network can produce node and edge attributional explanations along multiple channels, the number of which is independent of task specifications.
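
The repository's Keras implementation is not reproduced here; purely as a schematic of the multi-channel idea (all names below are hypothetical), the sketch gives each of K explanation channels its own edge-attention weights, which double as per-channel edge attributions:

```python
import torch
import torch.nn as nn

class MultiChannelEdgeAttention(nn.Module):
    """Toy layer: K independent attention channels over a graph's edges;
    the attention weights are read out as per-channel edge explanations."""

    def __init__(self, dim, channels):
        super().__init__()
        self.channels = channels
        self.score = nn.Linear(2 * dim, channels)  # one logit per channel

    def forward(self, h, edge_index):
        # h: (N, dim) node features; edge_index: (2, E) source/target ids.
        src, dst = edge_index
        logits = self.score(torch.cat([h[src], h[dst]], dim=-1))  # (E, K)
        alpha = torch.sigmoid(logits)  # per-channel edge importances in [0, 1]
        out = h.new_zeros(self.channels, *h.shape)
        for k in range(self.channels):
            # Aggregate messages weighted by this channel's attention.
            out[k].index_add_(0, dst, alpha[:, k:k + 1] * h[src])
        return out, alpha  # alpha: one set of edge attributions per channel
```

Because the number of channels K is a hyperparameter of the layer rather than of the task, this mirrors the paper's claim that the number of explanation channels is independent of task specifications.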

SoPa: Bridging CNNs, RNNs, and Weighted Finite-State Machines

Noahs-ARK/soft_patterns 15 May 2018

Recurrent and convolutional neural networks comprise two distinct families of models that have proven to be useful for encoding natural language utterances.

AudioMNIST: Exploring Explainable Artificial Intelligence for Audio Analysis on a Simple Benchmark

soerenab/AudioMNIST 9 Jul 2018

Explainable Artificial Intelligence (XAI) is targeted at understanding how models perform feature selection and derive their classification decisions.

Do Not Trust Additive Explanations

ModelOriented/iBreakDown 27 Mar 2019

Explainable Artificial Intelligence (XAI) has received a great deal of attention recently.

On the Explanation of Machine Learning Predictions in Clinical Gait Analysis

sebastian-lapuschkin/explaining-deep-clinical-gait-classification 16 Dec 2019

Machine learning (ML) is increasingly used to support decision-making in the healthcare sector.