Explainable artificial intelligence

136 papers with code • 0 benchmarks • 8 datasets

Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence (AI) whose results can be understood by humans. It contrasts with the concept of the "black box" in machine learning, where even a model's designers cannot explain why it arrived at a specific decision. XAI may be an implementation of the social right to explanation. XAI is relevant even if there is no legal right or regulatory requirement; for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. In this way, the aim of XAI is to explain what has been done, what is being done now, and what will be done next, and to unveil the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.

Libraries

Use these libraries to find Explainable artificial intelligence models and implementations

Most implemented papers

Axiomatic Attribution for Deep Networks

ankurtaly/Attributions ICML 2017

We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works.
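
The attribution method this paper introduces, integrated gradients, averages the model's gradients along a straight-line path from a baseline input to the actual input and scales them by the input difference. A minimal sketch of that idea follows; model, x, and the all-zeros baseline are placeholder assumptions, not the authors' reference implementation.

import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    # Attribute model(x)[..., target] to the features of x by averaging
    # gradients along the straight path from a baseline to x.
    if baseline is None:
        baseline = torch.zeros_like(x)           # common choice: all-zeros baseline
    total_grad = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(point)[..., target].sum()  # scalar score of the chosen class
        grad, = torch.autograd.grad(score, point)
        total_grad += grad
    # completeness: attributions sum approximately to f(x) - f(baseline)
    return (x - baseline) * total_grad / steps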

GNNExplainer: Generating Explanations for Graph Neural Networks

RexYing/gnn-model-explainer NeurIPS 2019

We formulate GNNExplainer as an optimization task that maximizes the mutual information between a GNN's prediction and the distribution of possible subgraph structures.
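
In practice this objective is approached by learning a soft mask over the input graph's edges so that the masked subgraph preserves the GNN's original prediction while staying sparse. A rough sketch of that optimization loop, with a hypothetical gnn(x, adj) returning per-node logits and a dense adjacency matrix adj (both assumptions for illustration, not the repository's API):

import torch

def explain_node(gnn, x, adj, node_idx, epochs=200, sparsity=0.005):
    # Learn edge importances by keeping the GNN's prediction for node_idx
    # stable under a soft edge mask while penalizing mask size, a common
    # surrogate for the mutual-information objective.
    with torch.no_grad():
        label = gnn(x, adj).argmax(dim=-1)[node_idx]      # prediction to explain
    edge_mask = torch.nn.Parameter(torch.randn_like(adj))
    optimizer = torch.optim.Adam([edge_mask], lr=0.01)
    for _ in range(epochs):
        optimizer.zero_grad()
        masked_adj = adj * torch.sigmoid(edge_mask)       # soft subgraph
        logits = gnn(x, masked_adj)[node_idx]
        loss = torch.nn.functional.cross_entropy(logits.unsqueeze(0), label.unsqueeze(0))
        loss = loss + sparsity * torch.sigmoid(edge_mask).sum()
        loss.backward()
        optimizer.step()
    return torch.sigmoid(edge_mask).detach()              # edge importance scores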

Proposed Guidelines for the Responsible Use of Explainable Machine Learning

jphall663/kdd_2019 8 Jun 2019

Explainable machine learning (ML) enables human learning from ML, human appeal of automated model decisions, regulatory compliance, and security audits of ML models.

Entropy-based Logic Explanations of Neural Networks

pietrobarbiero/pytorch_explain 12 Jun 2021

Explainable artificial intelligence has rapidly emerged since lawmakers have started requiring interpretable models for safety-critical domains.

Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy

chr5tphr/zennit 24 Jun 2021

Deep Neural Networks (DNNs) are known to be strong predictors, but their prediction strategies can rarely be understood.

SoPa: Bridging CNNs, RNNs, and Weighted Finite-State Machines

Noahs-ARK/soft_patterns 15 May 2018

Recurrent and convolutional neural networks comprise two distinct families of models that have proven to be useful for encoding natural language utterances.

Do Not Trust Additive Explanations

ModelOriented/iBreakDown 27 Mar 2019

Explainable Artificial Intelligence (XAI) has received a great deal of attention recently.

On the Explanation of Machine Learning Predictions in Clinical Gait Analysis

sebastian-lapuschkin/explaining-deep-clinical-gait-classification 16 Dec 2019

Machine learning (ML) is increasingly used to support decision-making in the healthcare sector.

Explaining How Deep Neural Networks Forget by Deep Visualization

giangnguyen2412/dissect_catastrophic_forgetting 3 May 2020

Explaining the behaviors of deep neural networks, usually considered black boxes, is critical, especially now that they are being adopted across diverse aspects of human life.

Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset

etjoa003/explainable_ai 7 Sep 2020

Heatmaps can be appealing because they are intuitive and visual to interpret, but assessing their quality may not be straightforward.
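
The heatmaps referred to here are saliency maps, i.e. attribution scores laid out over the input. For orientation, a minimal vanilla-gradient saliency sketch, assuming a PyTorch image classifier model and an input image of shape (1, C, H, W) (placeholders, not the paper's exact setup):

import torch

def gradient_saliency(model, image, target):
    # Vanilla gradient saliency: magnitude of the gradient of the target
    # class score with respect to each input pixel.
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target]
    score.backward()
    # collapse the channel dimension into a single per-pixel heatmap
    return image.grad.abs().max(dim=1).values.squeeze(0)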