Explanation Generation
85 papers with code • 5 benchmarks • 9 datasets
Most implemented papers
Explainable Automated Fact-Checking for Public Health Claims
We present the first study of explainable fact-checking for claims that require specific expertise.
AR-BERT: Aspect-relation enhanced Aspect-level Sentiment Classification with Multi-modal Explanations
We propose AR-BERT, a novel two-level global-local entity embedding scheme that allows efficient joint training of KG-based aspect embeddings and ALSC models.
TE2Rules: Explaining Tree Ensembles using Rules
Tree Ensemble (TE) models, such as Gradient Boosted Trees, often achieve optimal performance on tabular datasets, yet their lack of transparency poses challenges for comprehending their decision logic.
Explaining Patterns in Data with Language Models via Interpretable Autoprompting
Large language models (LLMs) have displayed an impressive ability to harness natural language to perform complex tasks.
Explaining black box text modules in natural language with language models
Here, we ask whether we can automatically obtain natural language explanations for black box text modules.
MACRec: a Multi-Agent Collaboration Framework for Recommendation
LLM-based agents have gained considerable attention for their decision-making skills and ability to handle complex tasks.
Using Stratified Sampling to Improve LIME Image Explanations
We investigate the use of a stratified sampling approach for LIME Image, a popular model-agnostic explainable AI method for computer vision tasks, in order to reduce the artifacts generated by typical Monte Carlo sampling.
Red Dragon AI at TextGraphs 2019 Shared Task: Language Model Assisted Explanation Generation
The TextGraphs-13 Shared Task on Explanation Regeneration asked participants to develop methods to reconstruct gold explanations for elementary science questions.
QED: A Framework and Dataset for Explanations in Question Answering
A question answering system that, in addition to providing an answer, provides an explanation of the reasoning that leads to that answer has potential advantages in terms of debuggability, extensibility, and trust.
Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?
We provide code for the experiments in this paper at https://github.com/peterbhase/LAS-NL-Explanations