Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations

14 Dec 2023  ·  Peiyi Wang, Lei Li, Zhihong Shao, R. X. Xu, Damai Dai, Yifei Li, Deli Chen, Y. Wu, Zhifang Sui

In this paper, we present Math-Shepherd, a process reward model for mathematical reasoning that assigns a reward score to each step of a math problem solution. Math-Shepherd is trained on automatically constructed step-wise supervision data, removing the bottleneck of heavy reliance on manual annotation in existing work. We explore its effectiveness in two scenarios: 1) Verification: Math-Shepherd reranks multiple outputs generated by Large Language Models (LLMs); 2) Reinforcement Learning: Math-Shepherd reinforces LLMs via step-by-step Proximal Policy Optimization (PPO). With Math-Shepherd, a series of open-source LLMs demonstrate strong performance. For instance, step-by-step PPO with Math-Shepherd significantly improves the accuracy of Mistral-7B (77.9% → 84.1% on GSM8K and 28.6% → 33.0% on MATH). With Math-Shepherd verification on top, accuracy rises further to 89.1% on GSM8K and 43.5% on MATH. We believe that automatic process supervision holds significant potential for the future evolution of LLMs.
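The automatically constructed step-wise supervision can be illustrated with a small sketch: for each step prefix of a candidate solution, a "completer" model samples several continuations, and the step is labeled correct if at least one continuation reaches the gold answer (hard estimation; a soft variant would use the success fraction instead). This is a minimal illustration, not the authors' code; `sample_completions` is a hypothetical stand-in for the completer LLM.

```python
def estimate_step_labels(solution_steps, gold_answer, sample_completions, n=8):
    """Automatic process supervision, hard estimation: label a step 1
    (correct) if any of n sampled completions of the prefix ending at
    that step reaches the gold answer, else 0.

    `sample_completions` is a hypothetical stand-in for the completer
    LLM: it takes a list of steps and returns a final answer.
    """
    labels = []
    for i in range(1, len(solution_steps) + 1):
        prefix = solution_steps[:i]
        answers = [sample_completions(prefix) for _ in range(n)]
        labels.append(1 if any(a == gold_answer for a in answers) else 0)
    return labels


# Toy completer: succeeds only when the prefix contains no erroneous step.
def toy_completer(prefix):
    return 42 if "erroneous step" not in prefix else 0


labels = estimate_step_labels(
    ["step 1", "step 2", "erroneous step"], 42, toy_completer, n=4
)
# labels == [1, 1, 0]: once the bad step enters the prefix, no completion recovers
```

The resulting per-step labels are what make PRM training possible without human annotators.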


Results from the Paper


Ranked #13 on Arithmetic Reasoning on GSM8K (using extra training data)

| Task | Dataset | Model | Accuracy (Global Rank) | Parameters, B (Global Rank) |
|---|---|---|---|---|
| Arithmetic Reasoning | GSM8K | Shepherd + DeepSeek-67B (SFT on MetaMATH + PRM rerank, k=256) | 93.3 (#13) | 67 (#85) |
| Arithmetic Reasoning | GSM8K | Shepherd + Mistral-7B (SFT on MetaMATH + PRM RL) | 84.1 (#47) | 7 (#10) |
| Arithmetic Reasoning | GSM8K | Shepherd + Mistral-7B (SFT on MetaMATH + PRM RL + PRM rerank, k=256) | 89.1 (#22) | 7 (#10) |
| Math Word Problem Solving | MATH | Shepherd + Mistral-7B (SFT on MetaMATH + PRM RL) | 33.0 (#58) | 7 (#58) |
| Math Word Problem Solving | MATH | Shepherd + Mistral-7B (SFT on MetaMATH + PRM RL + PRM rerank, k=256) | 43.5 (#46) | 7 (#58) |
| Math Word Problem Solving | MATH | Shepherd + DeepSeek-67B (SFT on MetaMATH + PRM rerank, k=256) | 48.1 (#33) | 67 (#19) |

All entries use extra training data (SFT on MetaMATH).
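For the PRM-rerank entries, verification means scoring every step of k sampled solutions with the reward model and keeping the highest-scoring candidate. Below is a minimal sketch assuming minimum-over-steps aggregation, one common way to collapse per-step PRM scores into a solution score (the paper's exact aggregation may differ); `score_steps` is a hypothetical stand-in for Math-Shepherd.

```python
def rerank(candidates, score_steps):
    """Best-of-k verification: return the candidate whose aggregated
    PRM score is highest. The aggregate used here is the minimum
    per-step score, so one clearly wrong step sinks a solution.

    `score_steps` maps a candidate solution to its list of per-step
    reward scores (a stand-in for the trained PRM).
    """
    return max(candidates, key=lambda sol: min(score_steps(sol)))


# Toy usage with made-up step scores for three candidate solutions.
toy_scores = {
    "sol_a": [0.9, 0.2, 0.8],   # one weak step drags it down
    "sol_b": [0.7, 0.7, 0.6],   # uniformly decent
    "sol_c": [0.95, 0.1, 0.9],
}
best = rerank(list(toy_scores), toy_scores.get)
# best == "sol_b": the min step scores are 0.2, 0.6, and 0.1
```

Min-aggregation rewards solutions with no weak link, which matches the intuition that a chain of reasoning is only as sound as its worst step.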

Methods