Premise Selection for Theorem Proving by Deep Graph Embedding

NeurIPS 2017  ·  Mingzhe Wang, Yihe Tang, Jian Wang, Jia Deng

We propose a deep learning-based approach to the problem of premise selection: selecting mathematical statements relevant for proving a given conjecture. We represent a higher-order logic formula as a graph that is invariant to variable renaming but still fully preserves syntactic and semantic information. We then embed the graph into a vector via a novel embedding method that preserves the information of edge ordering. Our approach achieves state-of-the-art results on the HolStep dataset, improving the classification accuracy from 83% to 90.3%.
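The renaming-invariant graph construction can be illustrated with a short sketch. This is a minimal toy version under assumed conventions of our own (formulas are nested tuples, and lowercase leaf strings are treated as variables); the actual FormulaNet pipeline parses HOL terms and handles binders and subterm sharing in more detail. The key ideas shown are (1) all occurrences of a variable share one node relabeled `VAR`, so alpha-renaming leaves the graph unchanged, and (2) edges record argument position, preserving edge ordering.

```python
from itertools import count

def formula_to_graph(term):
    """Convert a nested-tuple formula into (labels, edges).

    Illustrative convention (not from the paper): a lowercase leaf
    string is a variable.  Each variable's occurrences are merged into
    one shared node labeled 'VAR', so renaming it yields an identical
    graph.  Each edge stores its child index to keep argument order.
    """
    labels = {}          # node id -> label
    edges = []           # (parent id, child id, argument position)
    var_node = {}        # variable name -> its shared node id
    next_id = count()

    def build(t):
        if isinstance(t, str):
            if t[0].islower():              # variable leaf
                if t not in var_node:
                    nid = next(next_id)
                    labels[nid] = "VAR"     # name erased, structure kept
                    var_node[t] = nid
                return var_node[t]
            nid = next(next_id)
            labels[nid] = t                 # constant / function symbol
            return nid
        head, *args = t                     # application node
        nid = next(next_id)
        labels[nid] = head
        for pos, arg in enumerate(args):
            edges.append((nid, build(arg), pos))
        return nid

    build(term)
    return labels, edges
```

For example, `formula_to_graph(("FORALL", "x", ("P", "x")))` and `formula_to_graph(("FORALL", "y", ("P", "y")))` produce identical graphs, while changing the predicate symbol does not.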


Datasets


Task                      | Dataset                 | Model            | Metric                  | Value | Rank
--------------------------|-------------------------|------------------|-------------------------|-------|-----
Automated Theorem Proving | HolStep (Conditional)   | FormulaNet-basic | Classification Accuracy | 0.891 | # 3
Automated Theorem Proving | HolStep (Conditional)   | FormulaNet       | Classification Accuracy | 0.903 | # 2
Automated Theorem Proving | HolStep (Unconditional) | FormulaNet       | Classification Accuracy | 0.900 | # 1
Automated Theorem Proving | HolStep (Unconditional) | FormulaNet-basic | Classification Accuracy | 0.890 | # 2

Methods


No methods listed for this paper.