Math Word Problem Solving
64 papers with code • 11 benchmarks • 17 datasets
A math word problem is a mathematical exercise (such as in a textbook, worksheet, or exam) where significant background information on the problem is presented in ordinary language rather than in mathematical notation. As most word problems involve a narrative of some sort, they are sometimes referred to as story problems and may vary in the amount of technical language used.
Libraries
Use these libraries to find Math Word Problem Solving models and implementations.
Most implemented papers
LogicSolver: Towards Interpretable Math Word Problem Solving with Logical Prompt-enhanced Learning
To address this issue and make a step towards interpretable MWP solving, we first construct a high-quality MWP dataset named InterMWP, which consists of 11,495 MWPs, each annotated with interpretable logical formulas based on algebraic knowledge as the grounded linguistic logic of its solution equation.
Sparks of Artificial General Intelligence: Early experiments with GPT-4
We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models.
RetICL: Sequential Retrieval of In-Context Examples with Reinforcement Learning
Recent developments in large pre-trained language models have enabled unprecedented performance on a variety of downstream tasks.
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions.
Translating a Math Word Problem to an Expression Tree
Moreover, we analyze the performance of three popular SEQ2SEQ models on math word problem solving.
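The expression-tree representation these models target can be illustrated with a minimal sketch; the toy problem, equation, and `Node` class below are invented for illustration and are not from the paper:

```python
# Minimal sketch of a solution-expression tree for a toy word problem:
# "Tom has 3 bags with 5 apples each and gives away 2. How many remain?"
# Target equation: 3 * 5 - 2

class Node:
    """A node in a binary expression tree: an operator or a quantity."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b,
       "/": lambda a, b: a / b}

def evaluate(node):
    """Recursively evaluate the tree bottom-up."""
    if node.value not in OPS:   # leaf: a quantity taken from the problem text
        return node.value
    return OPS[node.value](evaluate(node.left), evaluate(node.right))

# (3 * 5) - 2
tree = Node("-", Node("*", Node(3), Node(5)), Node(2))
print(evaluate(tree))  # -> 13
```

Generating the tree rather than a flat token sequence removes ambiguity about operator precedence, which is one motivation the paper gives for the expression-tree target.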
Modeling Intra-Relation in Math Word Problems with Different Functional Multi-Head Attentions
Several deep learning models have been proposed for solving math word problems (MWPs) automatically.
A Goal-Driven Tree-Structured Neural Model for Math Word Problems
Most existing neural models for math word problems exploit Seq2Seq model to generate solution expressions sequentially from left to right, whose results are far from satisfactory due to the lack of goal-driven mechanism commonly seen in human problem solving.
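The goal-driven decoding order can be sketched without the neural model: each operator token opens a left and a right sub-goal, which are expanded top-down in pre-order. The function below only mimics that traversal order on a finished token sequence; it is an illustrative assumption, not the model's actual decoder:

```python
# Sketch of the top-down (pre-order) generation order a goal-driven
# tree decoder follows: an operator token decomposes the current goal
# into two sub-goals, expanded left-to-right.

def solve_preorder(tokens):
    """Rebuild an expression from a pre-order token list and evaluate it.

    Operators consume two sub-expressions; numbers are leaf goals."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    it = iter(tokens)

    def expand():
        tok = next(it)
        if tok in ops:          # goal splits into two sub-goals
            left = expand()     # left sub-goal is expanded first
            right = expand()
            return ops[tok](left, right)
        return float(tok)       # leaf quantity: the goal is closed

    return expand()

# Pre-order token sequence for (3 * 5) - 2:
print(solve_preorder(["-", "*", "3", "5", "2"]))  # -> 13.0
```

This contrasts with left-to-right Seq2Seq decoding, where no explicit goal/sub-goal structure guides which token comes next.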
Graph-to-Tree Neural Networks for Learning Structured Input-Output Translation with Applications to Semantic Parsing and Math Word Problem
In particular, we investigated our model on two tasks: neural semantic parsing and math word problem solving.
Graph-to-Tree Learning for Solving Math Word Problems
While recent tree-based neural models have demonstrated promising results in generating solution expressions for math word problems (MWPs), most of these models do not capture the relationships and order information among the quantities well.