EPT-X: An Expression-Pointer Transformer model that generates eXplanations for numbers

ACL 2022 · Bugeun Kim, Kyung Seo Ki, Sangkyu Rhim, Gahgene Gweon

In this paper, we propose a neural model, EPT-X (Expression-Pointer Transformer with Explanations), which utilizes natural language explanations to solve algebraic word problems. To enhance the explainability of the encoding process of a neural model, EPT-X adopts the concepts of plausibility and faithfulness, which are drawn from human math word problem solving strategies. A plausible explanation is one that includes contextual information for the numbers and variables that appear in a given math word problem. A faithful explanation is one that accurately represents the reasoning process behind the model's solution equation. The EPT-X model yields an average baseline performance of 69.59% on our PEN dataset and produces explanations of quality comparable to human output. The contribution of this work is two-fold. (1) The EPT-X model: an explainable neural model that sets a baseline for the algebraic word problem solving task in terms of the model's correctness, plausibility, and faithfulness. (2) A new dataset: we release a novel dataset, PEN (Problems with Explanations for Numbers), which expands the existing datasets by attaching explanations to each number/variable.


Datasets


Introduced in the Paper:

PEN

Used in the Paper:

MAWPS ALG514 DRAW-1K

Results from the Paper


Task                       Dataset   Model  Accuracy (%)  Global Rank
Math Word Problem Solving  ALG514    EPT-X  67.07         #9
Math Word Problem Solving  ALG514    EPT    73.91         #6
Math Word Problem Solving  DRAW-1K   EPT-X  56.0          #5
Math Word Problem Solving  DRAW-1K   EPT    63.5          #1
Math Word Problem Solving  MAWPS     EPT    88.7          #8
Math Word Problem Solving  MAWPS     EPT-X  84.57         #12
Math Word Problem Solving  PEN       EPT-X  69.59         #1

Methods


No methods listed for this paper.