Are NLP Models really able to Solve Simple Math Word Problems?

The problem of designing NLP solvers for math word problems (MWPs) has seen sustained research activity and steady gains in test accuracy. Since existing solvers achieve high performance on the benchmark datasets for elementary-level MWPs containing one-unknown arithmetic word problems, such problems are often considered "solved", with the bulk of research attention moving to more complex MWPs. In this paper, we restrict our attention to English MWPs taught in grades four and lower. We provide strong evidence that the existing MWP solvers rely on shallow heuristics to achieve high performance on the benchmark datasets. To this end, we show that MWP solvers that do not have access to the question asked in the MWP can still solve a large fraction of MWPs. Similarly, models that treat MWPs as bag-of-words can also achieve surprisingly high accuracy. Further, we introduce a challenge dataset, SVAMP, created by applying carefully chosen variations over examples sampled from existing datasets. The best accuracy achieved by state-of-the-art models is substantially lower on SVAMP, thus showing that much remains to be done even for the simplest of the MWPs.
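The question-removal ablation described above can be sketched as follows. This is a minimal illustration, not the paper's actual preprocessing: the sentence-splitting heuristic and the example problem are assumptions, chosen only to show the idea of feeding a solver the problem body without its question.

```python
import re

def remove_question(mwp: str) -> str:
    """Return the MWP body with its final (question) sentence removed.

    Assumes the question is the last sentence ending in "?", a common
    pattern in one-unknown arithmetic word problems.
    """
    # Split on sentence-ending punctuation, keeping non-empty parts.
    sentences = [s.strip() for s in re.split(r"(?<=[.?!])\s+", mwp.strip()) if s.strip()]
    # Drop the trailing question sentence, if present.
    if sentences and sentences[-1].endswith("?"):
        sentences = sentences[:-1]
    return " ".join(sentences)

problem = ("Jack had 8 pens and Mary had 5 pens. "
           "Jack gave 3 pens to Mary. "
           "How many pens does Jack have now?")
print(remove_question(problem))
# → Jack had 8 pens and Mary had 5 pens. Jack gave 3 pens to Mary.
```

A solver that still predicts "8 - 3" from this question-less input is plainly relying on surface patterns rather than understanding what is being asked.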

NAACL 2021


Introduced in the Paper: the SVAMP challenge dataset.


Results from the Paper

 Ranked #1 on Math Word Problem Solving on SVAMP (using extra training data)

Task                       Dataset  Model                       Metric Name         Metric Value  Global Rank
Math Word Problem Solving  SVAMP    Graph2Tree with RoBERTa     Execution Accuracy  43.8          #1
Math Word Problem Solving  SVAMP    GTS with RoBERTa            Execution Accuracy  41.0          #2
Math Word Problem Solving  SVAMP    LSTM Seq2Seq with RoBERTa   Execution Accuracy  40.3          #3
Math Word Problem Solving  SVAMP    Transformer with RoBERTa    Execution Accuracy  38.9          #4
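Execution accuracy, the metric reported above, counts a prediction as correct when the predicted equation evaluates to the gold numeric answer. A minimal sketch of how such a metric can be computed (the function name, the use of Python `eval` over sanitized arithmetic expressions, and the tolerance are assumptions, not the paper's evaluation code):

```python
def execution_accuracy(predictions, answers, tol=1e-4):
    """Fraction of predicted arithmetic expressions that evaluate to the gold answer."""
    correct = 0
    for expr, gold in zip(predictions, answers):
        try:
            # Expressions are assumed to contain only numbers and operators,
            # so evaluation with no builtins available is a reasonable sketch.
            value = eval(expr, {"__builtins__": {}}, {})
        except Exception:
            continue  # malformed predictions simply count as wrong
        if abs(value - gold) < tol:
            correct += 1
    return correct / len(predictions)

preds = ["8 - 3", "5 + 3", "(4 + 2) * 3"]
golds = [5.0, 8.0, 12.0]
print(execution_accuracy(preds, golds))  # 2 of 3 expressions match the gold answer
```

Note that two different equations can execute to the same value, which is why execution accuracy is a more permissive measure than exact equation match.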