Explanation Generation
61 papers with code • 5 benchmarks • 9 datasets
Benchmarks
These leaderboards are used to track progress in Explanation Generation
Libraries
Use these libraries to find Explanation Generation models and implementations
Datasets
Most implemented papers
Explainable Agency by Revealing Suboptimality in Child-Robot Learning Scenarios
In the application scenario, the child and the robot learn together how to play a zero-sum game that requires logical and mathematical thinking.
LIREx: Augmenting Language Inference with Relevant Explanation
Natural language explanations (NLEs) are a special form of data annotation in which annotators identify rationales (most significant text tokens) when assigning labels to data instances, and write out explanations for the labels in natural language based on the rationales.
Explain and Predict, and then Predict Again
A desirable property of learning systems is to be both effective and interpretable.
Faithfully Explainable Recommendation via Neural Logic Reasoning
Knowledge graphs (KGs) have become increasingly important to endow modern recommender systems with the ability to generate traceable reasoning paths to explain the recommendation process.
Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks
In this paper, we lay down some of the fundamental principles that an explanation method for graph neural networks should follow and introduce RDT-Fidelity, a metric for measuring an explanation's effectiveness.
Generating High-Quality Explanations for Navigation in Partially-Revealed Environments
We present an approach for generating natural language explanations of high-level behavior of autonomous agents navigating in partially-revealed environments.
An Information Retrieval Approach to Building Datasets for Hate Speech Detection
Our key insight is that the rarity and subjectivity of hate speech are akin to that of relevance in information retrieval (IR).
Explainable Debugger for Black-box Machine Learning Models
In this paper, we propose a systematic debugging framework for the development of ML models that guides the data engineering process using the model's decision boundary.
Learn-Explain-Reinforce: Counterfactual Reasoning and Its Guidance to Reinforce an Alzheimer's Disease Diagnosis Model
Existing studies on disease diagnostic models focus either on diagnostic model learning for performance improvement or on the visual explanation of a trained diagnostic model.
A Framework for Learning Ante-hoc Explainable Models via Concepts
To the best of our knowledge, ours is the first ante-hoc explanation generation method to report results on a large-scale dataset such as ImageNet.