Analysing Mathematical Reasoning Abilities of Neural Models

Mathematical reasoning---a core ability within human intelligence---presents some unique challenges as a domain: we do not come to understand and solve mathematical problems primarily on the back of experience and evidence, but on the basis of inferring, learning, and exploiting laws, axioms, and symbol manipulation rules. In this paper, we present a new challenge for the evaluation (and eventually the design) of neural architectures and similar systems, developing a task suite of mathematics problems involving sequential questions and answers in a free-form textual input/output format. The structured nature of the mathematics domain, covering arithmetic, algebra, probability, and calculus, enables the construction of training and test splits designed to clearly illuminate the capabilities and failure modes of different architectures, as well as to evaluate their ability to compose and relate knowledge and learned processes. Having described the data generation process and its potential future expansions, we conduct a comprehensive analysis of models from two broad classes of the most powerful sequence-to-sequence architectures and find notable differences in their ability to resolve mathematical problems and generalize their knowledge.
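To make the free-form question/answer format concrete, here is a minimal sketch of what a generator for one arithmetic module might look like. This is an illustrative toy, not the paper's actual generation code: the function name and the exact question template are assumptions, but the shape of the data (plain-text question string in, plain-text answer string out) matches the format described in the abstract.

```python
import operator
import random

# Hypothetical arithmetic module: each example is a (question, answer)
# pair of free-form text strings, as in the paper's dataset format.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def make_arithmetic_example(rng):
    """Generate one question/answer pair of plain-text strings."""
    a, b = rng.randint(-99, 99), rng.randint(-99, 99)
    sym = rng.choice(sorted(OPS))
    question = f"What is {a} {sym} {b}?"
    answer = str(OPS[sym](a, b))  # ground truth computed exactly
    return question, answer

rng = random.Random(0)
question, answer = make_arithmetic_example(rng)
```

Because examples are generated programmatically from a fixed grammar, training and test splits can be controlled precisely, e.g. by holding out operand ranges or entire modules to probe generalization.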

PDF | Abstract (ICLR 2019)

Datasets


Introduced in the Paper:

Mathematics Dataset
Task                 Dataset              Model        Metric    Metric Value  Global Rank
Question Answering   Mathematics Dataset  LSTM         Accuracy  0.57          # 3
Question Answering   Mathematics Dataset  Transformer  Accuracy  0.76          # 2
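The Accuracy values above are exact-match scores: a model's output string counts as correct only if it equals the reference answer. A minimal sketch of that scoring (the function name is illustrative, not from the paper's code):

```python
def exact_match_accuracy(predictions, answers):
    """Fraction of predictions matching the reference answer exactly
    (after stripping surrounding whitespace)."""
    assert len(predictions) == len(answers)
    correct = sum(p.strip() == a.strip() for p, a in zip(predictions, answers))
    return correct / len(answers)

# Example: one of two predictions matches the reference exactly.
score = exact_match_accuracy(["-70", "4"], ["-70", "5"])  # 0.5
```

Under this all-or-nothing metric, a nearly correct answer (e.g. off by one digit) scores zero, which is part of why the reported accuracies are well below 1.0.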

Methods


No methods listed for this paper.