Explainable Artificial Intelligence (XAI)
283 papers with code • 1 benchmark • 5 datasets
Libraries
Use these libraries to find Explainable Artificial Intelligence (XAI) models and implementations.

Most implemented papers
RISE: Randomized Input Sampling for Explanation of Black-box Models
We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments.
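The core of RISE is simple enough to sketch: probe the black box with randomly masked copies of the input and weight each mask by the class score it produces. Below is a minimal, simplified illustration of that idea, not the paper's full method (RISE also randomly shifts the upsampled masks); the toy `model` at the end is purely for demonstration.

```python
# RISE-style saliency sketch: average random masks, weighted by the class score
# the masked input achieves. Assumes `model` maps (N, 3, H, W) -> (N, C) scores.
import torch
import torch.nn.functional as F

def rise_saliency(model, image, target, n_masks=500, grid=7, p_keep=0.5):
    """image: (3, H, W) tensor; returns an (H, W) saliency map."""
    _, H, W = image.shape
    # Low-resolution binary masks, upsampled into smooth soft masks as in RISE.
    masks = (torch.rand(n_masks, 1, grid, grid) < p_keep).float()
    masks = F.interpolate(masks, size=(H, W), mode="bilinear", align_corners=False)
    with torch.no_grad():
        scores = model(image.unsqueeze(0) * masks)[:, target]  # (n_masks,)
    # Weight each mask by its class score and normalize by expected coverage.
    saliency = (scores.view(-1, 1, 1, 1) * masks).sum(0).squeeze(0)
    return saliency / (n_masks * p_keep)

# Toy usage with an untrained classifier (illustrative only):
model = torch.nn.Sequential(
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(3, 10)
)
image = torch.rand(3, 64, 64)
print(rise_saliency(model, image, target=3).shape)  # torch.Size([64, 64])
```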
Proposed Guidelines for the Responsible Use of Explainable Machine Learning
Explainable machine learning (ML) enables human learning from ML, human appeal of automated model decisions, regulatory compliance, and security audits of ML models.
Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy
Deep Neural Networks (DNNs) are known to be strong predictors, but their prediction strategies can rarely be understood.
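For orientation, a minimal sketch of computing LRP attributions with Zennit, loosely following the usage example in the library's documentation; exact class names and signatures may differ across versions, so treat the specifics as assumptions to verify against the docs.

```python
# Sketch of LRP attribution with Zennit (names per its docs; verify for your version).
import torch
from torchvision.models import vgg16
from zennit.attribution import Gradient
from zennit.composites import EpsilonPlusFlat

model = vgg16(weights=None).eval()
composite = EpsilonPlusFlat()  # a common LRP rule composite

x = torch.randn(1, 3, 224, 224, requires_grad=True)
target = 0

# The composite registers LRP rules as hooks while the context is active.
with Gradient(model=model, composite=composite) as attributor:
    output, attribution = attributor(x, torch.eye(1000)[[target]])

heatmap = attribution.sum(1)  # sum over color channels -> (1, 224, 224)
```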
AudioMNIST: Exploring Explainable Artificial Intelligence for Audio Analysis on a Simple Benchmark
Explainable Artificial Intelligence (XAI) is targeted at understanding how models perform feature selection and derive their classification decisions.
Explanations Based on Item Response Theory (eXirt): A Model-Specific Method to Explain Tree-Ensemble Model in Trust Perspective
In recent years, XAI researchers have formalized proposals and developed new methods to explain black-box models, yet the community has reached no general consensus on which method to use; in practice, the choice is driven largely by a method's popularity.
Contrastive Explanations with Local Foil Trees
Recent advances in interpretable Machine Learning (iML) and eXplainable AI (XAI) construct explanations based on the importance of features in classification tasks.
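A contrastive explanation answers "why class A rather than class B?". The sketch below conveys the flavor using a shallow surrogate decision tree; the actual foil-tree algorithm additionally locates the closest leaf of the foil class and contrasts the two decision paths, which this simplified version omits. All dataset and model choices here are illustrative.

```python
# Sketch of a contrastive explanation via a surrogate decision tree.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Fit a shallow surrogate tree to the black box's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

x = X[:1]                            # instance to explain
fact = int(surrogate.predict(x)[0])  # predicted ("fact") class
foil = (fact + 1) % 3                # contrast class the user asks about

print(f"Why class {fact} rather than class {foil}? Conditions along x's path:")
tree = surrogate.tree_
for node in surrogate.decision_path(x).indices:
    if tree.children_left[node] != -1:  # internal (splitting) node
        f, t = tree.feature[node], tree.threshold[node]
        side = "<=" if x[0, f] <= t else ">"
        print(f"  feature[{f}] = {x[0, f]:.2f} {side} {t:.2f}")
```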
Do Not Trust Additive Explanations
Explainable Artificial Intelligence (XAI) has received a great deal of attention recently.
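The paper's central caution is that additive explanations cannot represent feature interactions. A self-contained illustration (the XOR function below is my example, not the paper's): the best additive fit to a pure interaction explains nothing at all.

```python
# Additive explanations vs. interactions: the best additive (linear) model of
# XOR assigns zero weight to both features and leaves all variance unexplained.
import numpy as np

rng = np.random.default_rng(0)
X = rng.choice([0.0, 1.0], size=(1000, 2))
y = np.logical_xor(X[:, 0], X[:, 1]).astype(float)  # pure interaction

# Best additive fit: intercept + w1*x1 + w2*x2, via least squares.
A = np.column_stack([np.ones(len(X)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print("additive fit weights:", w)                    # near [0.5, 0, 0]
print("mean squared error:", np.mean((A @ w - y) ** 2))  # ~0.25 = Var(y)
```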
TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP
While state-of-the-art NLP explainability (XAI) methods focus on explaining per-sample decisions in supervised end or probing tasks, this is insufficient to explain and quantify model knowledge transfer during (un-)supervised training.
On the Explanation of Machine Learning Predictions in Clinical Gait Analysis
Machine learning (ML) is increasingly used to support decision-making in the healthcare sector.
Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI
The rise of deep learning in today's applications has created a growing need to explain models' decisions beyond prediction performance in order to foster trust and accountability.
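When ground-truth object masks are available, as in CLEVR-XAI, a heatmap can be scored by how much of its relevance lands inside the mask. A minimal sketch of such a metric, in the spirit of the paper's relevance mass accuracy (function name and toy data are illustrative):

```python
# Ground-truth evaluation sketch: share of positive attribution inside the mask.
import numpy as np

def relevance_mass_accuracy(attribution: np.ndarray, gt_mask: np.ndarray) -> float:
    """attribution: HxW heatmap; gt_mask: HxW boolean ground-truth mask."""
    r = np.clip(attribution, 0, None)  # keep positive relevance only
    total = r.sum()
    return float(r[gt_mask].sum() / total) if total > 0 else 0.0

heatmap = np.random.rand(128, 128)          # stand-in for a real heatmap
mask = np.zeros((128, 128), dtype=bool)
mask[32:96, 32:96] = True                   # stand-in ground-truth object
print(f"relevance mass accuracy: {relevance_mass_accuracy(heatmap, mask):.3f}")
```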