Explainable Artificial Intelligence (XAI)

215 papers with code • 0 benchmarks • 2 datasets


Most implemented papers

BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations

x-y-zhao/BayLime 5 Dec 2020

Given the pressing need for assuring algorithmic transparency, Explainable AI (XAI) has emerged as one of the key areas of AI research.
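
BayLIME builds Bayesian priors into the standard LIME surrogate; for context, the sketch below runs plain LIME with the widely used lime package on a scikit-learn model. It is a generic illustration, not BayLIME itself and not taken from the x-y-zhao/BayLime repository.

```python
# Minimal sketch of a standard LIME tabular explanation (not BayLIME itself).
# Assumes scikit-learn and the `lime` package are installed.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# Black-box model to be explained.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Local explanation for a single instance: a sparse linear surrogate fitted on
# perturbed samples around x. BayLIME additionally places Bayesian priors on
# these surrogate weights to stabilise repeated explanations.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(exp.as_list())
```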

LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information

intelligence-csd-auth-gr/LionLearn 13 Apr 2021

Artificial Intelligence (AI) has had a tremendous impact on the unprecedented growth of technology in almost every field.

Revealing drivers and risks for power grid frequency stability with explainable AI

record/5497609 7 Jun 2021

Stable operation of the electrical power system requires the power grid frequency to stay within strict operational limits.

Quantitative Evaluation of Explainable Graph Neural Networks for Molecular Property Prediction

biomed-AI/MolRep 1 Jul 2021

Advances in machine learning have led to graph neural network-based methods for drug discovery, yielding promising results in molecular design, chemical synthesis planning, and molecular property prediction.

Improving Deep Neural Network Classification Confidence using Heatmap-based eXplainable AI

ericotjo001/explainable_ai 30 Dec 2021

This paper quantifies the quality of heatmap-based eXplainable AI (XAI) methods with respect to the image classification problem.
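
As background on how such heatmaps are produced, here is a minimal gradient-saliency sketch in plain PyTorch; it is a generic attribution baseline, not the confidence-improvement method from ericotjo001/explainable_ai, and the untrained ResNet-18 and random input are stand-ins.

```python
# Minimal gradient-saliency heatmap for an image classifier (generic sketch,
# not the paper's confidence-improvement method). Assumes torchvision >= 0.13
# (older versions use pretrained=False instead of weights=None).
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()

# Dummy input standing in for a preprocessed 224x224 RGB image.
x = torch.randn(1, 3, 224, 224, requires_grad=True)

logits = model(x)
target_class = logits.argmax(dim=1)

# Gradient of the target-class score with respect to the input pixels.
logits[0, target_class].backward()

# Saliency heatmap: maximum absolute gradient over the colour channels.
heatmap = x.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
print(heatmap.shape, float(heatmap.max()))
```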

GAM(e) changer or not? An evaluation of interpretable machine learning models based on additive model constraints

interpretml/interpret 19 Apr 2022

The number of information systems (IS) studies dealing with explainable artificial intelligence (XAI) is currently exploding as the field demands more transparency about the internal decision logic of machine learning (ML) models.
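
The interpretml/interpret library ships Explainable Boosting Machines, a GAM-style glassbox model; the sketch below assumes its documented high-level API (ExplainableBoostingClassifier, explain_global), which may differ slightly across versions.

```python
# Minimal sketch of a GAM-style glassbox model with InterpretML's Explainable
# Boosting Machine; assumes `interpret` and scikit-learn are installed and that
# the high-level API matches the documented usage.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# EBM: an additive model, so each feature's contribution can be inspected directly.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

print("test accuracy:", ebm.score(X_test, y_test))

# Global explanation: per-feature shape functions and importances
# (the exact structure of the returned data may vary by version).
global_exp = ebm.explain_global()
print(global_exp.data())
```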

OmniXAI: A Library for Explainable AI

salesforce/omnixai 1 Jun 2022

We introduce OmniXAI (short for Omni eXplainable AI), an open-source Python library for eXplainable AI (XAI) that offers omni-way explainability and a range of interpretable machine learning techniques to address the pain points of understanding and interpreting decisions made by machine learning (ML) models in practice.
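
As a small illustration of one model-agnostic technique that libraries of this kind unify, here is a permutation-importance sketch using only scikit-learn; it is not OmniXAI's API.

```python
# Generic model-agnostic explanation sketch (permutation importance) using only
# scikit-learn; an example of the kind of technique OmniXAI wraps, NOT its API.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# a simple global, model-agnostic importance estimate.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
print("top features by permutation importance:", top)
```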

From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation

rachtibat/zennit-crp 7 Jun 2022

In this work we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives and thus allows answering both the "where" and "what" questions for individual predictions.
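
The toy sketch below only illustrates the "where" versus "what" split on a channel-resolved attribution tensor; it is not the CRP algorithm and does not use the zennit-crp API.

```python
# Toy illustration of the "where" vs. "what" split on a channel-resolved
# attribution tensor R of shape (channels, height, width); NOT the CRP
# algorithm or the zennit-crp API, just the idea of conditioning on a concept.
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(64, 14, 14))  # stand-in for per-channel relevance of one layer

# "What": rank concepts (here: channels) by their total relevance to the prediction.
concept_relevance = R.sum(axis=(1, 2))
top_concept = int(concept_relevance.argmax())

# "Where": the spatial heatmap of relevance conditioned on that single concept.
concept_heatmap = R[top_concept]

print("most relevant concept (channel):", top_concept)
print("conditional heatmap shape:", concept_heatmap.shape)
```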

OpenXAI: Towards a Transparent Evaluation of Model Explanations

ai4life-group/openxai 22 Jun 2022

OpenXAI comprises the following key components: (i) a flexible synthetic data generator and a collection of diverse real-world datasets, pre-trained models, and state-of-the-art feature attribution methods, and (ii) open-source implementations of eleven quantitative metrics for evaluating the faithfulness, stability (robustness), and fairness of explanation methods, which together enable comparisons of several explanation methods across a wide variety of metrics, models, and datasets.
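
To give a flavour of the metrics involved, here is a small numpy sketch of a top-k feature-agreement score between an explanation and a ground-truth attribution; it is an illustrative stand-in, not OpenXAI's implementation, and the function name and sample data are assumptions.

```python
# Illustrative top-k feature-agreement metric between an explanation and a
# ground-truth attribution vector (a stand-in for the kind of metric OpenXAI
# implements, not its actual code).
import numpy as np

def feature_agreement(ground_truth: np.ndarray, explanation: np.ndarray, k: int = 5) -> float:
    """Fraction of the top-k ground-truth features recovered in the explanation's top-k."""
    gt_top = set(np.argsort(-np.abs(ground_truth))[:k])
    ex_top = set(np.argsort(-np.abs(explanation))[:k])
    return len(gt_top & ex_top) / k

rng = np.random.default_rng(0)
true_attr = rng.normal(size=20)                          # e.g. known weights of a synthetic model
noisy_attr = true_attr + rng.normal(scale=0.5, size=20)  # e.g. output of an attribution method

print("feature agreement@5:", feature_agreement(true_attr, noisy_attr, k=5))
```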

"Even if ..." -- Diverse Semifactual Explanations of Reject

andreartelt/diversesemifactualsreject 5 Jul 2022

In this work, we propose to explain rejects by semifactual explanations, an instance of example-based explanation methods, which themselves have not yet been widely considered in the XAI community.
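
As a rough illustration of the idea (not the paper's method), the sketch below builds a reject option from a confidence threshold and checks that a rejected instance would still be rejected even after increasing one feature; the threshold, feature choice, and step size are arbitrary assumptions for demonstration.

```python
# Rough illustration of a semifactual explanation for a reject option
# ("even if feature j were larger, the instance would still be rejected").
# Generic sketch, not the method proposed in the paper.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
clf = LogisticRegression(max_iter=5000).fit(X, y)

# Reject option: refuse to predict for the 5% least confident instances.
confidences = clf.predict_proba(X).max(axis=1)
threshold = np.quantile(confidences, 0.05)

def is_rejected(x: np.ndarray) -> bool:
    return clf.predict_proba(x.reshape(1, -1)).max() < threshold

# The least confident instance is certainly rejected; probe a "what if" change.
rejected_idx = int(confidences.argmin())
x = X[rejected_idx].copy()

j, delta = 0, 1.0  # arbitrary feature index and step size for illustration
x_semifactual = x.copy()
x_semifactual[j] += delta

if is_rejected(x_semifactual):
    print(f"Even if feature {j} increased by {delta}, instance {rejected_idx} is still rejected.")
else:
    print("The probed change would flip the reject decision, so it is not a semifactual.")
```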