Explanation regularization (ER) aims to improve neural language model (NLM) generalization by pushing the machine rationales to align with human rationales.
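As a minimal sketch of the idea (not the paper's actual objective), ER can be framed as a task loss plus an alignment penalty between machine and human rationale scores; the function name, the mean-squared-error penalty, and the weight `lam` below are all illustrative assumptions:

```python
import numpy as np

def er_loss(machine_rationale, human_rationale, task_loss, lam=1.0):
    """Hypothetical ER objective: task loss plus an alignment penalty
    (here mean squared error) pushing machine rationale scores toward
    human rationale annotations. `lam` trades off the two terms."""
    machine_rationale = np.asarray(machine_rationale, dtype=float)
    human_rationale = np.asarray(human_rationale, dtype=float)
    alignment = np.mean((machine_rationale - human_rationale) ** 2)
    return task_loss + lam * alignment
```

When the machine rationale already matches the human one, the penalty vanishes and only the task loss remains; mismatched rationales add a positive penalty scaled by `lam`.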
Free-text rationales aim to explain neural language model (LM) behavior more flexibly and intuitively via natural language.
Moreover, little is understood about how ER model performance is affected by the choice of ER criteria, or by the number and choice of training instances with human rationales.
An extractive rationale explains a language model's (LM's) prediction on a given task instance by highlighting the text inputs that most influenced the prediction.
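A toy illustration of this highlighting step, assuming per-token attribution scores are already available (e.g., from gradients or attention; the function and its parameters are hypothetical):

```python
def extract_rationale(tokens, scores, k=2):
    """Toy extractive rationale: keep the k tokens with the largest
    absolute attribution scores, preserving their original order."""
    ranked = sorted(range(len(tokens)), key=lambda i: abs(scores[i]), reverse=True)
    keep = set(ranked[:k])
    return [tok for i, tok in enumerate(tokens) if i in keep]
```

For example, with tokens `["the", "movie", "was", "great"]` and scores `[0.1, 0.3, 0.05, 0.9]`, the top-2 rationale is `["movie", "great"]`.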
We demonstrate that temporal stroke information recovered by TRACE from offline data can be used for handwriting synthesis and establish the first benchmarks for a stroke trajectory recovery system trained on the IAM online handwriting dataset.
We also show that the class of twisted fractionally Calabi-Yau algebras is closed under derived equivalence, answering a question by Herschend and Iyama.
Subject classes: Representation Theory; Rings and Algebras. MSC: 16G10, 16D50, 16E05, 16E65.
Recently, knowledge graph (KG)-augmented models have achieved noteworthy success on various commonsense reasoning tasks.
Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks, like question answering and item recommendation.
We present a model that uses a single first-person image to generate an egocentric basketball motion sequence in the form of a 12D camera configuration trajectory, which encodes a player's 3D location and 3D head orientation throughout the sequence.
This paper presents a novel approach to estimating the continuous six-degrees-of-freedom (6-DoF) pose (3D translation and rotation) of an object from a single RGB image.