Explainable artificial intelligence
136 papers with code • 0 benchmarks • 8 datasets
XAI refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the concept of the "black box" in machine learning, where even a model's designers cannot explain why the AI arrived at a specific decision. XAI may be an implementation of the social right to explanation, but it is relevant even where there is no legal right or regulatory requirement; for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. The aim of XAI is thus to explain what has been done, what is being done now, and what will be done next, and to unveil the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.
Benchmarks
These leaderboards are used to track progress in Explainable artificial intelligence
Libraries
Use these libraries to find Explainable artificial intelligence models and implementations.
Most implemented papers
Axiomatic Attribution for Deep Networks
We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works.
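The method introduced here is Integrated Gradients: feature i is attributed (x_i − x'_i) times the path integral of the model's gradient along the straight line from a baseline x' to the input x. Below is a minimal PyTorch sketch; the function name, the 50-step Riemann-sum approximation, and the toy linear model in the usage lines are illustrative choices, not the authors' reference implementation.

```python
import torch

def integrated_gradients(model, x, baseline, target, steps=50):
    """Riemann-sum approximation of Integrated Gradients:
    IG_i(x) = (x_i - x'_i) * integral over a in [0, 1] of dF/dx_i
    evaluated along the straight path from baseline x' to input x."""
    # Points interpolated between baseline and input: (steps, *x.shape).
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = (baseline + alphas * (x - baseline)).requires_grad_(True)

    # Gradient of the target logit at every point on the path.
    grads = torch.autograd.grad(model(path)[:, target].sum(), path)[0]

    # Average path gradient, scaled by the input-baseline difference.
    return (x - baseline) * grads.mean(dim=0)

# Usage with a toy model; for a linear model the completeness axiom
# (attributions summing to F(x) - F(baseline)) holds exactly.
model = torch.nn.Linear(4, 3)
x = torch.tensor([1.0, 2.0, 3.0, 4.0])
attr = integrated_gradients(model, x, torch.zeros_like(x), target=0)
print(attr, attr.sum(), model(x)[0] - model(torch.zeros(4))[0])
```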
GNNExplainer: Generating Explanations for Graph Neural Networks
We formulate GNNExplainer as an optimization task that maximizes the mutual information between a GNN's prediction and the distribution of possible subgraph structures.
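In practice this objective is optimized by learning a continuous mask over edges: maximizing the mutual information is approximated by minimizing the cross-entropy of the GNN's prediction on the masked graph while regularizing the mask toward sparsity. The following is a hedged sketch under simplifying assumptions: a dense adjacency matrix, a toy one-layer GNN, and only the edge mask with a plain L1 penalty (the paper additionally uses an element-wise entropy regularizer and a feature mask).

```python
import torch

class TinyGNN(torch.nn.Module):
    """Toy one-layer message-passing network over a dense adjacency."""
    def __init__(self, d_in, d_hidden, n_classes):
        super().__init__()
        self.lin1 = torch.nn.Linear(d_in, d_hidden)
        self.lin2 = torch.nn.Linear(d_hidden, n_classes)

    def forward(self, adj, feats):
        h = torch.relu(adj @ self.lin1(feats))   # one propagation step
        return self.lin2(h.mean(dim=0))          # graph-level logits

def explain_edges(gnn, adj, feats, target, epochs=200, lam=0.01):
    """Learn a sigmoid edge mask that preserves the GNN's prediction
    (low negative log-likelihood, a proxy for high mutual information)
    while an L1 penalty pushes the mask toward a small subgraph."""
    mask_logits = torch.randn_like(adj, requires_grad=True)
    opt = torch.optim.Adam([mask_logits], lr=0.05)
    for _ in range(epochs):
        mask = torch.sigmoid(mask_logits)
        logits = gnn(adj * mask, feats)          # soft-deleted edges
        loss = -torch.log_softmax(logits, -1)[target] + lam * mask.sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(mask_logits).detach()   # edge importance in [0, 1]

gnn = TinyGNN(5, 16, 3)
adj = (torch.rand(8, 8) > 0.5).float()
feats = torch.randn(8, 5)
target = gnn(adj, feats).argmax()    # explain the model's own prediction
print(explain_edges(gnn, adj, feats, target))
```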
Proposed Guidelines for the Responsible Use of Explainable Machine Learning
Explainable machine learning (ML) enables human learning from ML, human appeal of automated model decisions, regulatory compliance, and security audits of ML models.
Entropy-based Logic Explanations of Neural Networks
Explainable artificial intelligence has rapidly emerged since lawmakers have started requiring interpretable models for safety-critical domains.
Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy
Deep Neural Networks (DNNs) are known to be strong predictors, but their prediction strategies can rarely be understood.
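As a hedged usage sketch: Zennit computes local attributions by registering rule "composites" on a PyTorch model, and CoRelAy/ViRelAy then aggregate and browse such attributions dataset-wide. The pattern below follows Zennit's documented Gradient-attributor idiom, but the exact class names and call signatures should be treated as assumptions to verify against the library's documentation.

```python
import torch
from torchvision.models import vgg16
from zennit.attribution import Gradient
from zennit.composites import EpsilonPlusFlat

model = vgg16(weights=None).eval()
data = torch.randn(1, 3, 224, 224)

# EpsilonPlusFlat maps LRP rules onto the model's layers; the Gradient
# attributor then returns the model output and per-pixel relevance.
composite = EpsilonPlusFlat()
with Gradient(model=model, composite=composite) as attributor:
    output, relevance = attributor(data, torch.eye(1000)[[0]])

heatmap = relevance.sum(1)  # sum relevance over color channels
```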
SoPa: Bridging CNNs, RNNs, and Weighted Finite-State Machines
Recurrent and convolutional neural networks comprise two distinct families of models that have proven to be useful for encoding natural language utterances.
Do Not Trust Additive Explanations
Explainable Artificial Intelligence (XAI) has received a great deal of attention recently.
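The title's warning concerns additive explanations, the family behind SHAP- and LIME-style attributions. As a hedged toy illustration of the underlying problem (not the paper's own experiments): for a model that is pure feature interaction, the best additive surrogate carries none of the signal, so any per-feature additive attribution necessarily smears or hides the interaction.

```python
import numpy as np

# f(x1, x2) = x1 * x2 on {-1, +1}^2 is pure interaction (XOR-like).
x1, x2 = np.meshgrid([-1.0, 1.0], [-1.0, 1.0])
f = (x1 * x2).ravel()

# Best additive surrogate a + b1*x1 + b2*x2, fit by least squares.
X = np.column_stack([np.ones(4), x1.ravel(), x2.ravel()])
coef, rss, *_ = np.linalg.lstsq(X, f, rcond=None)

print(coef)  # ~[0, 0, 0]: the additive part of f is identically zero
print(rss)   # ~[4.]: all of f's variance is unexplained interaction
```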
On the Explanation of Machine Learning Predictions in Clinical Gait Analysis
Machine learning (ML) is increasingly used to support decision-making in the healthcare sector.
Explaining How Deep Neural Networks Forget by Deep Visualization
Explaining the behavior of deep neural networks, usually considered black boxes, is critical, especially as they are being adopted across diverse aspects of human life.
Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset
Heatmaps can be appealing due to the intuitive and visual way they can be understood, but assessing their quality might not be straightforward.
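With a synthetic dataset, the ground-truth evidence region is known by construction, which makes simple quantitative scores possible. One hedged example of such a score (the paper's exact metrics may differ) is the fraction of absolute saliency mass that falls inside the ground-truth mask:

```python
import numpy as np

def relevance_mass(saliency, gt_mask):
    """Fraction of total absolute saliency inside the ground-truth
    region; 1.0 means the heatmap is perfectly focused on the evidence."""
    s = np.abs(saliency)
    return s[gt_mask].sum() / s.sum()

# Synthetic image where the discriminative patch is known: 8x8 of 32x32.
gt_mask = np.zeros((32, 32), dtype=bool)
gt_mask[12:20, 12:20] = True

heatmap = np.random.rand(32, 32)         # saliency map under evaluation
print(relevance_mass(heatmap, gt_mask))  # ~0.0625 for uninformed saliency
```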