MiniF2F: a cross-system benchmark for formal Olympiad-level mathematics

We present miniF2F, a dataset of formal Olympiad-level mathematics problem statements intended to provide a unified cross-system benchmark for neural theorem proving. The miniF2F benchmark currently targets Metamath, Lean, Isabelle (partially), and HOL Light (partially), and consists of 488 problem statements drawn from the AIME, AMC, and the International Mathematical Olympiad (IMO), as well as from high-school and undergraduate mathematics courses. We report baseline results using GPT-f, a neural theorem prover based on GPT-3, and provide an analysis of its performance. We intend for miniF2F to be a community-driven effort and hope that our benchmark will help spur advances in neural theorem proving.
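To make "formal problem statement" concrete, here is an illustrative sketch of what a miniF2F-style statement looks like in Lean 3 with mathlib; the theorem name, the particular problem, and the closing tactic are hypothetical and not drawn from the dataset itself:

```lean
-- Illustrative AMC-style algebra problem, stated formally:
-- if 3x + 7 = 25 for a real number x, then x = 6.
theorem amc_style_example (x : ℝ) (h : 3 * x + 7 = 25) : x = 6 :=
by linarith
```

A prover is scored on whether it can produce a proof term or tactic script (such as the `linarith` call above) that closes the stated goal.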

Published at ICLR 2022 (PDF and abstract available).


Introduced in the Paper: miniF2F

Used in the Paper: MATH, NaturalProofs

Results from the Paper

 Ranked #1 on Automated Theorem Proving on miniF2F-valid (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Automated Theorem Proving | miniF2F-test | Lean GPT-f | Pass@1 | 24.6 | #7 |
| Automated Theorem Proving | miniF2F-test | Lean GPT-f | Pass@8 | 29.2 | #2 |
| Automated Theorem Proving | miniF2F-test | Lean tidy | Pass@1 | 18 | #11 |
| Automated Theorem Proving | miniF2F-test | Metamath GPT-f | Pass@1 | 1.3 | #14 |
| Automated Theorem Proving | miniF2F-test | Metamath GPT-f | Pass@8 | 1.6 | #3 |
| Automated Theorem Proving | miniF2F-valid | Lean GPT-f | Pass@1 | 23.9 | #1 |
| Automated Theorem Proving | miniF2F-valid | Lean GPT-f | Pass@8 | 29.3 | #1 |
| Automated Theorem Proving | miniF2F-valid | Lean tidy | Pass@1 | 16.8 | #2 |
| Automated Theorem Proving | miniF2F-valid | Metamath GPT-f | Pass@1 | 1 | #3 |
| Automated Theorem Proving | miniF2F-valid | Metamath GPT-f | Pass@8 | 2 | #2 |