40 papers with code • 5 benchmarks • 8 datasets
These leaderboards are used to track progress in Explanation Generation
Libraries
Use these libraries to find Explanation Generation models and implementations
Most implemented papers
AR-BERT: Aspect-relation enhanced Aspect-level Sentiment Classification with Multi-modal Explanations
We propose AR-BERT, a novel two-level global-local entity embedding scheme that allows efficient joint training of KG-based aspect embeddings and ALSC models.
TE2Rules: Extracting Rule Lists from Tree Ensembles
Tree Ensemble (TE) models (e.g., Gradient Boosted Trees and Random Forests) often provide higher prediction performance compared to single decision trees.
Explaining Patterns in Data with Language Models via Interpretable Autoprompting
Large language models (LLMs) have displayed an impressive ability to harness natural language to perform complex tasks.
Explaining black box text modules in natural language with language models
Here, we ask whether we can automatically obtain natural language explanations for black box text modules.
Red Dragon AI at TextGraphs 2019 Shared Task: Language Model Assisted Explanation Generation
The TextGraphs-13 Shared Task on Explanation Regeneration asked participants to develop methods to reconstruct gold explanations for elementary science questions.
QED: A Framework and Dataset for Explanations in Question Answering
A question answering system that, in addition to providing an answer, provides an explanation of the reasoning that leads to that answer has potential advantages in terms of debuggability, extensibility, and trust.
Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?
We provide code for the experiments in this paper at https://github.com/peterbhase/LAS-NL-Explanations
Explainable Automated Fact-Checking for Public Health Claims
We present the first study of explainable fact-checking for claims which require specific expertise.
Elaborative Simplification: Content Addition and Explanation Generation in Text Simplification
Much of modern-day text simplification research focuses on sentence-level simplification, transforming original, more complex sentences into simplified versions.
Towards Interpretable Natural Language Understanding with Explanations as Latent Variables
In this paper, we develop a general framework for interpretable natural language understanding that requires only a small set of human annotated explanations for training.