21 papers with code • 1 benchmark • 3 datasets
AR-BERT: Aspect-relation enhanced Aspect-level Sentiment Classification with Multi-modal Explanations
We propose AR-BERT, a novel two-level global-local entity embedding scheme that allows efficient joint training of KG-based aspect embeddings and ALSC models.
The TextGraphs-13 Shared Task on Explanation Regeneration asked participants to develop methods to reconstruct gold explanations for elementary science questions.
A question answering system that, in addition to providing an answer, provides an explanation of the reasoning that leads to that answer has potential advantages in terms of debuggability, extensibility, and trust.
Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?
We provide code for the experiments in this paper at https://github.com/peterbhase/LAS-NL-Explanations
Much of modern-day text simplification research focuses on sentence-level simplification, transforming original, more complex sentences into simplified versions.
In this paper, we develop a general framework for interpretable natural language understanding that requires only a small set of human annotated explanations for training.
In the application scenario, the child and the robot learn together how to play a zero-sum game that requires logical and mathematical thinking.
Natural language explanations (NLEs) are a special form of data annotation in which annotators identify rationales (the most significant text tokens) when assigning labels to data instances, and then write out explanations for those labels in natural language based on the rationales.
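To make the annotation format concrete, here is a minimal sketch of what a single NLE-annotated instance might look like. The field names and the example record are hypothetical, not taken from any specific dataset; the structure simply mirrors the three components named above (label, rationale tokens, free-text explanation).

```python
from dataclasses import dataclass, field


@dataclass
class NLEAnnotation:
    """One illustrative NLE-annotated instance (hypothetical schema)."""
    text: str                       # the data instance being labeled
    label: str                      # the label assigned by the annotator
    rationale: list = field(default_factory=list)  # most significant tokens
    explanation: str = ""           # natural language explanation of the label


example = NLEAnnotation(
    text="The battery life is excellent but the screen is dim.",
    label="positive",
    rationale=["battery", "excellent"],
    explanation="The reviewer praises the battery life as excellent.",
)
```

A dataset in this format pairs each labeled instance with both extractive evidence (the rationale) and an abstractive justification (the explanation), which is what distinguishes NLEs from plain label annotation.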