Explainable artificial intelligence
203 papers with code • 0 benchmarks • 8 datasets
XAI refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the concept of the "black box" in machine learning, where even a model's designers cannot explain why it arrived at a specific decision. XAI may serve as an implementation of the social right to explanation, but it is relevant even where no legal right or regulatory requirement exists: for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. The aim of XAI is thus to explain what has been done, what is being done now, and what will be done next, and to reveal the information on which these actions are based. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new hypotheses.
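As a concrete illustration of post-hoc explanation of a black-box model, the sketch below implements permutation feature importance: each feature is scored by how much shuffling it degrades the model's predictions. The model and all names here are illustrative assumptions, not drawn from any paper listed on this page.

```python
import numpy as np

# Toy "black box": a model whose internals we pretend not to see.
# Illustrative only -- feature 0 matters most, feature 2 not at all.
def black_box(X):
    return 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.0 * X[:, 2]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = black_box(X)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Score each feature by the increase in MSE when that
    feature's column is shuffled (its information destroyed)."""
    rng = np.random.default_rng(seed)
    base_error = np.mean((model(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        errors = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and y
            errors.append(np.mean((model(Xp) - y) ** 2))
        importances[j] = np.mean(errors) - base_error
    return importances

imp = permutation_importance(black_box, X, y)
# Feature 0 (weight 2.0) dominates; feature 2 (weight 0.0) scores ~0.
```

The technique treats the model purely as a callable, which is why it applies to any black box; its main caveat is that shuffling breaks correlations between features, which can distort scores when features are strongly dependent.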
Benchmarks
These leaderboards are used to track progress in explainable artificial intelligence.
Libraries
Use these libraries to find explainable artificial intelligence models and implementations.
Latest papers
Using Explainable AI and Transfer Learning to understand and predict the maintenance of Atlantic blocking with limited observational data
This work demonstrates the potential for machine learning methods to extract meaningful precursors of extreme weather events and achieve better prediction using limited observational data.
Procedural Fairness in Machine Learning
We propose a novel metric to evaluate the group procedural fairness of ML models, called $GPF_{FAE}$, which utilizes a widely used explainable artificial intelligence technique, namely feature attribution explanation (FAE), to capture the decision process of the ML models.
Intrinsic Subgraph Generation for Interpretable Graph based Visual Question Answering
In this work, we introduce an interpretable approach for graph-based VQA and demonstrate competitive performance on the GQA dataset.
Interpretable Machine Learning for Survival Analysis
With the spread and rapid advancement of black box machine learning models, the field of interpretable machine learning (IML) or explainable artificial intelligence (XAI) has become increasingly important over the last decade.
Explainable Learning with Gaussian Processes
When using integrated gradients as an attribution method, we show that the attributions of a GPR model also follow a Gaussian process distribution, which quantifies the uncertainty in attribution arising from uncertainty in the model.
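Integrated gradients, the attribution method referenced above, averages the model's gradient along a straight path from a baseline input to the actual input and scales by the input difference, so that the attributions sum to the change in the model's output. A minimal sketch for a generic differentiable model follows; the toy function and finite-difference gradient are illustrative assumptions, not the paper's GPR setup.

```python
import numpy as np

# Toy differentiable model (illustrative stand-in for a regression mean).
def f(x):
    return x[0] ** 2 + 3.0 * x[1]

def grad(x, eps=1e-5):
    """Central finite-difference gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def integrated_gradients(x, baseline=None, steps=100):
    """Average gradients along the straight path from baseline to x,
    scaled by (x - baseline); attributions sum to f(x) - f(baseline)."""
    if baseline is None:
        baseline = np.zeros_like(x)
    # Midpoint rule over the path parameter t in (0, 1).
    path = [baseline + (k + 0.5) / steps * (x - baseline) for k in range(steps)]
    avg_grad = np.mean([grad(p) for p in path], axis=0)
    return (x - baseline) * avg_grad

x = np.array([2.0, 1.0])
attr = integrated_gradients(x)
# Completeness: attr.sum() matches f(x) - f(baseline) = 4 + 3 = 7.
```

The completeness property (attributions summing to the output difference) is what makes the method attractive here: in the Gaussian process setting described above, propagating the model's predictive distribution through this path integral yields a distribution over attributions rather than a point estimate.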
An Ensemble Framework for Explainable Geospatial Machine Learning Models
Analyzing spatially varying effects is pivotal in geographic analysis.
LangXAI: Integrating Large Vision Models for Generating Textual Explanations to Enhance Explainability in Visual Perception Tasks
LangXAI is a framework that integrates Explainable Artificial Intelligence (XAI) with advanced vision models to generate textual explanations for visual recognition tasks.
Multi-Excitation Projective Simulation with a Many-Body Physics Inspired Inductive Bias
To overcome this limitation, we introduce Multi-Excitation Projective Simulation (mePS), a generalization that considers a chain-of-thought to be a random walk of several particles on a hypergraph.
Detecting mental disorder on social media: a ChatGPT-augmented explainable approach
In the digital era, the prevalence of depressive symptoms expressed on social media has raised serious concerns, necessitating advanced methodologies for timely detection.
NormEnsembleXAI: Unveiling the Strengths and Weaknesses of XAI Ensemble Techniques
This paper presents a comprehensive comparative analysis of explainable artificial intelligence (XAI) ensembling methods.