Prediction or Comparison: Toward Interpretable Qualitative Reasoning

Findings (ACL) 2021 · Mucheng Ren, Heyan Huang, Yang Gao

Qualitative relationships describe how changing one property (e.g., velocity) affects another (e.g., kinetic energy), and they constitute a considerable portion of textual knowledge. Current approaches either use semantic parsers to transform natural language inputs into logical expressions or apply a "black-box" model to solve them in one step. The former has a limited application range, while the latter lacks interpretability. In this work, we categorize qualitative reasoning tasks into two types: prediction and comparison. In particular, we adopt neural network modules trained in an end-to-end manner to simulate the two reasoning processes. Experiments on two qualitative reasoning question answering datasets, QuaRTz and QuaRel, show our methods' effectiveness and generalization capability, and the intermediate outputs produced by the modules make the reasoning process interpretable.
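To make the prediction/comparison distinction concrete, here is a minimal, hypothetical sketch (not the paper's actual architecture) of a two-branch reasoner: a shared encoder feeds either a prediction module, which classifies the direction of change of a property, or a comparison module, which scores two candidate worlds. All names, dimensions, and the toy bag-of-features input are illustrative assumptions.

```python
import torch
import torch.nn as nn

class QualitativeReasoner(nn.Module):
    """Toy two-branch reasoner, for illustration only.

    Prediction: does property B go up or down when property A changes?
    Comparison: which of two described worlds has more of property B?
    """

    def __init__(self, input_dim=300, hidden=128):
        super().__init__()
        # Shared question encoder (stand-in for a pretrained LM encoder).
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden), nn.ReLU())
        # Prediction module: logits over the direction of change {up, down}.
        self.predict_head = nn.Linear(hidden, 2)
        # Comparison module: a scalar score per candidate world.
        self.compare_head = nn.Linear(hidden, 1)

    def forward(self, question_vec, task):
        h = self.encoder(question_vec)
        if task == "prediction":
            return self.predict_head(h)           # shape: (batch, 2)
        # For comparison, question_vec holds one row per candidate world.
        return self.compare_head(h).squeeze(-1)   # shape: (num_worlds,)

model = QualitativeReasoner()
# Prediction-style question: "If velocity increases, kinetic energy ...?"
print(model(torch.randn(1, 300), task="prediction").shape)  # torch.Size([1, 2])
# Comparison-style question with two candidate worlds.
print(model(torch.randn(2, 300), task="comparison").shape)  # torch.Size([2])
```

Because both branches share one encoder and are trained end-to-end, the branch outputs (direction logits, world scores) serve as inspectable intermediate results, which is the kind of interpretability the abstract refers to.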

