Explanation Generation

21 papers with code • 1 benchmark • 3 datasets

Explanation generation is the task of producing a natural-language justification for a model's output, for example the reasoning behind an answer, a sentiment label, or a fact-checking verdict.

Most implemented papers

AR-BERT: Aspect-relation enhanced Aspect-level Sentiment Classification with Multi-modal Explanations

mainuliitkgp/ar-bert 26 Aug 2021

We propose AR-BERT, a novel two-level global-local entity embedding scheme that allows efficient joint training of KG-based aspect embeddings and ALSC models.
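The paper's two-level global-local embedding scheme is more involved, but the core idea of conditioning an aspect-level sentiment classification (ALSC) model on a knowledge-graph-derived aspect embedding can be sketched as follows (a minimal sketch: the model name, dimensions, and concatenation fusion are illustrative assumptions, not the paper's architecture):

```python
# Minimal sketch (not the authors' code): fuse a KG-derived aspect
# embedding with a BERT sentence encoding for ALSC.
import torch
import torch.nn as nn
from transformers import BertModel

class AspectSentimentClassifier(nn.Module):
    def __init__(self, kg_dim=128, num_labels=3):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        # Classify from [sentence encoding ; aspect entity embedding].
        self.classifier = nn.Linear(hidden + kg_dim, num_labels)

    def forward(self, input_ids, attention_mask, aspect_embedding):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        fused = torch.cat([cls, aspect_embedding], dim=-1)
        return self.classifier(fused)
```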

Red Dragon AI at TextGraphs 2019 Shared Task: Language Model Assisted Explanation Generation

mdda/worldtree_corpus WS 2019

The TextGraphs-13 Shared Task on Explanation Regeneration asked participants to develop methods to reconstruct gold explanations for elementary science questions.
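The shared task is essentially a ranking problem: given a question, order a bank of facts so that the gold explanation sentences come first. A simple TF-IDF ranker along the lines below is a common baseline for this setup (the question and fact bank here are made up for illustration):

```python
# Illustrative baseline (not the shared-task system): rank candidate
# explanation sentences by TF-IDF cosine similarity to the question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

question = "Which form of energy is needed to melt ice?"
facts = [
    "melting means changing from a solid into a liquid by adding heat energy",
    "ice is a kind of solid",
    "a plant requires sunlight to grow",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([question] + facts)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

# The highest-scoring facts form the reconstructed explanation.
for score, fact in sorted(zip(scores, facts), reverse=True):
    print(f"{score:.2f}  {fact}")
```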

QED: A Framework and Dataset for Explanations in Question Answering

google-research-datasets/QED 8 Sep 2020

A question answering system that, in addition to providing an answer, provides an explanation of the reasoning that leads to that answer has potential advantages in terms of debuggability, extensibility, and trust.
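Concretely, such a system returns more than a bare answer string. A hypothetical container for an explained answer might look like this (all field names are assumptions for illustration, not the QED annotation schema):

```python
# Hypothetical data structure: an answer bundled with the evidence
# sentence and phrase alignments that support it.
from dataclasses import dataclass, field

@dataclass
class ExplainedAnswer:
    question: str
    answer: str
    selected_sentence: str  # passage sentence that supports the answer
    # Pairs linking question phrases to the passage phrases they refer to.
    referential_equalities: list[tuple[str, str]] = field(default_factory=list)

qa = ExplainedAnswer(
    question="who wrote the novel moby dick",
    answer="Herman Melville",
    selected_sentence="Moby-Dick is an 1851 novel by Herman Melville.",
    referential_equalities=[("the novel moby dick", "Moby-Dick")],
)
print(qa.answer, "--", qa.selected_sentence)
```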

Explainable Automated Fact-Checking for Public Health Claims

neemakot/Health-Fact-Checking EMNLP 2020

We present the first study of explainable fact-checking for claims which require specific expertise.
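One way to prototype the task setup, though not the paper's model, is to pair an off-the-shelf classifier for the veracity verdict with a summarizer that condenses the evidence into a readable explanation. The four-way label set follows the paper; the models and the single example below are illustrative stand-ins:

```python
# Sketch of the task setup (not the paper's system): predict a
# veracity label for a claim, then generate an explanation by
# summarizing the evidence.
from transformers import pipeline

claim = "Vitamin C cures the common cold."
evidence = (
    "Randomized trials have found that regular vitamin C supplementation "
    "does not reduce the incidence of colds in the general population, "
    "though it may slightly shorten their duration."
)

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
labels = ["true", "false", "mixture", "unproven"]
verdict = classifier(f"{claim} {evidence}", candidate_labels=labels)

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
explanation = summarizer(evidence, max_length=40, min_length=10)[0]["summary_text"]

print("verdict:", verdict["labels"][0])
print("explanation:", explanation)
```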

Elaborative Simplification: Content Addition and Explanation Generation in Text Simplification

nehasrikn/elaborative-simplification Findings (ACL) 2021

Much of modern-day text simplification research focuses on sentence-level simplification, transforming original, more complex sentences into simplified versions.

Towards Interpretable Natural Language Understanding with Explanations as Latent Variables

JamesHujy/ELV NeurIPS 2020

In this paper, we develop a general framework for interpretable natural language understanding that requires only a small set of human annotated explanations for training.
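The framework treats the explanation as a latent variable connecting the input to the label: training alternates between generating candidate explanations and reweighting them by how well they support the prediction. A toy, runnable sketch of that weighting step (the keyword-overlap scorer is a stand-in for a real explanation-augmented predictor, and all strings are invented):

```python
# Loose sketch of the idea (not the authors' code): score candidate
# latent explanations by how well they support the label, then
# normalize into posterior weights.
def predictor_score(x, explanation, label):
    # Stand-in predictor: count label-relevant keyword overlap.
    keywords = {"positive": {"great", "love"}, "negative": {"bad", "hate"}}
    return 1 + len(keywords[label] & set(explanation.split()))

x = "I love this movie, the acting is great"
label = "positive"
candidates = [
    "the reviewer says they love it and the acting is great",
    "the reviewer mentions a movie",
]

# E-step: weight each candidate explanation by predictor likelihood.
scores = [predictor_score(x, e, label) for e in candidates]
weights = [s / sum(scores) for s in scores]

# M-step (not shown): update generator and predictor on the weighted
# explanations; here we just report the weights.
for w, e in zip(weights, candidates):
    print(f"{w:.2f}  {e}")
```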

Explainable Agency by Revealing Suboptimality in Child-Robot Learning Scenarios

Silviatulli/suboptimax 6 Nov 2020

In the application scenario, the child and the robot learn together how to play a zero-sum game that requires logical and mathematical thinking.
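In a solved zero-sum game, revealing suboptimality reduces to comparing the game-theoretic value of the chosen move against the best available one. A generic illustration with the game of Nim, where taking the last stick wins (the paper's actual game and explanation strategy differ):

```python
# Generic illustration, not the paper's game: explain a move by
# comparing its minimax value to the optimal move's value.
from functools import lru_cache

@lru_cache(maxsize=None)
def value(sticks):
    # +1 if the player to move wins with optimal play, else -1.
    if sticks == 0:
        return -1  # the previous player took the last stick and won
    return max(-value(sticks - k) for k in (1, 2, 3) if k <= sticks)

def explain_move(sticks, taken):
    best = max((k for k in (1, 2, 3) if k <= sticks),
               key=lambda k: -value(sticks - k))
    if -value(sticks - taken) < -value(sticks - best):
        return f"Taking {taken} can lose; taking {best} keeps a winning position."
    return f"Taking {taken} is optimal."

print(explain_move(sticks=5, taken=2))  # taking 1 was the optimal move
```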

LIREx: Augmenting Language Inference with Relevant Explanation

zhaoxy92/LIREx 16 Dec 2020

Natural language explanations (NLEs) are a special form of data annotation in which annotators identify rationales (the most significant text tokens) when assigning labels to data instances, and write out explanations for the labels in natural language based on those rationales.
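For a natural language inference instance, such an annotation bundles the label with the rationale tokens and the free-text explanation written from them. A hypothetical record (field names are assumptions, not the LIREx or e-SNLI schema):

```python
# Illustrative data structure for an NLE annotation as described above.
from dataclasses import dataclass

@dataclass
class NLEAnnotation:
    premise: str
    hypothesis: str
    label: str                   # e.g. entailment / neutral / contradiction
    rationale_tokens: list[str]  # most significant text tokens
    explanation: str             # free-text explanation based on the rationale

ex = NLEAnnotation(
    premise="A man is playing a guitar on stage.",
    hypothesis="A person is performing music.",
    label="entailment",
    rationale_tokens=["playing", "guitar", "stage"],
    explanation="Playing a guitar on stage is a way of performing music.",
)
print(ex.label, "--", ex.explanation)
```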

Explain and Predict, and then Predict Again

JoshuaGhost/expred 11 Jan 2021

A desirable property of learning systems is to be both effective and interpretable.
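The title describes the pipeline: first explain and predict, then predict again from the explanation alone, which forces the extracted rationale to carry the decision-relevant signal. A toy sketch of that control flow (the keyword lists and classifier are stand-ins, not the authors' ExPred models):

```python
# Toy sketch of an explain-then-predict-again pipeline.
POSITIVE = {"great", "excellent", "love"}
NEGATIVE = {"awful", "boring", "hate"}

def extract_rationale(tokens):
    # "Explain": keep only the sentiment-bearing tokens.
    return [t for t in tokens if t in POSITIVE | NEGATIVE]

def classify(tokens):
    score = sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)
    return "positive" if score >= 0 else "negative"

tokens = "the plot was boring but the acting was great".split()
rationale = extract_rationale(tokens)
print("rationale:", rationale)
print("predict:", classify(tokens))           # predict on the full input
print("predict again:", classify(rationale))  # predict on the rationale only
```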