Premise selection is a fundamental problem of automated theorem proving.
Autoformalization is the process of automatically translating from natural language mathematics to formal specifications and proofs.
Ranked #1 on Automated Theorem Proving on miniF2F-test (using extra training data)
Transformer models yield impressive results on many NLP and sequence modeling tasks.
Ranked #3 on Image Generation on ImageNet 32x32 (bpd metric)
While designing inductive bias in neural architectures has been widely studied, we hypothesize that transformer networks are flexible enough to learn inductive bias from suitable generic tasks.
We examine whether self-supervised language modeling applied to mathematical formulas enables logical reasoning.
We design and conduct a simple experiment to study whether neural networks can perform several steps of approximate reasoning in a fixed dimensional latent space.
Our experiments show that the theorem prover trained with this exploration mechanism outperforms provers that are trained only on human proofs.
Ranked #3 on Automated Theorem Proving on HOList benchmark
This paper presents the first use of graph neural networks (GNNs) for higher-order proof search and demonstrates that GNNs can improve upon state-of-the-art results in this domain.
Ranked #1 on Automated Theorem Proving on HOList benchmark
We present an environment, benchmark, and deep learning driven automated theorem prover for higher-order logic.
Ranked #2 on Automated Theorem Proving on HOList benchmark
Text embeddings that represent natural language documents in a semantic vector space can be used for document retrieval via nearest-neighbor lookup.
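As an illustration of the retrieval scheme this abstract describes, here is a minimal sketch of nearest-neighbor lookup over embedding vectors using cosine similarity. The toy embeddings and their dimensionality are invented for the example and are not from the paper.

```python
import numpy as np

def build_index(doc_embeddings: np.ndarray) -> np.ndarray:
    """L2-normalize document embeddings so a dot product equals cosine similarity."""
    norms = np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
    return doc_embeddings / np.clip(norms, 1e-12, None)

def retrieve(query_embedding: np.ndarray, index: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k documents nearest to the query by cosine similarity."""
    q = query_embedding / max(np.linalg.norm(query_embedding), 1e-12)
    scores = index @ q                 # one similarity score per document
    return np.argsort(-scores)[:k]     # highest-scoring documents first

# Toy example: four "documents" embedded in a 3-dimensional semantic space.
docs = np.array([[1.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
index = build_index(docs)
nearest = retrieve(np.array([1.0, 0.05, 0.0]), index, k=2)
```

In a real system the embeddings would come from a trained text encoder and the exhaustive dot product would typically be replaced by an approximate nearest-neighbor index.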
We propose various machine learning tasks that can be performed on this dataset, and discuss their significance for theorem proving.
Ranked #3 on Automated Theorem Proving on HolStep (Unconditional)
Here we suggest deep-learning-based guidance for the proof search of the theorem prover E. We train and compare several deep neural network models on the traces of existing ATP proofs of Mizar statements and use them to select processed clauses during proof search.
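The selection step this abstract describes can be sketched roughly as follows. The `toy_score` heuristic is a hypothetical stand-in for the trained neural scorers, which are not reproduced here; the clause strings are likewise invented for illustration.

```python
from typing import Callable, List, Tuple

def select_clause(candidates: List[str],
                  score: Callable[[str], float]) -> Tuple[str, List[str]]:
    """Pick the highest-scoring clause, mimicking learned clause selection."""
    best = max(candidates, key=score)
    rest = [c for c in candidates if c is not best]
    return best, rest

# Hypothetical stand-in for a trained model: prefer shorter clauses.
toy_score = lambda clause: -len(clause)
clause, remaining = select_clause(["p(X) | q(X)", "p(a)", "r(X,Y) | s(Y)"], toy_score)
print(clause)  # "p(a)"
```

In the actual prover the scoring function is a neural network evaluated on clause features, and selection is interleaved with the saturation loop rather than performed on a static list.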
We study the effectiveness of neural sequence models for premise selection in automated theorem proving, one of the main bottlenecks in the formalization of mathematics.
Recently, the introduction of residual connections in conjunction with a more traditional architecture yielded state-of-the-art performance in the 2015 ILSVRC challenge, with performance similar to that of the latest-generation Inception-v3 network.
Ranked #4 on Classification on InDL
Precise business store front detection enables accurate geo-location of businesses, and further provides input for business categorization, listing generation, etc.
Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets confirm that SSD has comparable accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference.
Ranked #2 on Object Detection on PASCAL VOC 2012
Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks.
Ranked #8 on Retinal OCT Disease Classification on OCT2017
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities.
Ranked #461 on Image Classification on ImageNet (Number of params metric, using extra training data)
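The normalization the batch-normalization abstract above refers to can be sketched as a per-feature standardization of each mini-batch. This is a minimal training-mode sketch: the running averages used at inference time are omitted, and the learned scale/shift are reduced to scalars.

```python
import numpy as np

def batch_norm(x: np.ndarray, gamma: float = 1.0, beta: float = 0.0,
               eps: float = 1e-5) -> np.ndarray:
    """Normalize each feature over the mini-batch, then scale and shift."""
    mean = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                      # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)  # standardized activations
    return gamma * x_hat + beta

# Two features on very different raw scales.
batch = np.array([[1.0, 200.0],
                  [2.0, 400.0],
                  [3.0, 600.0]])
out = batch_norm(batch)
# Each column of `out` now has roughly zero mean and unit variance,
# regardless of the scale of the layer's raw inputs.
```

This is what stabilizes the input distribution of each layer during training and permits the higher learning rates the abstract mentions.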
On MNIST handwritten digits, we show that our model is robust to label corruption.
Several machine learning models, including neural networks, consistently misclassify adversarial examples: inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the model outputs an incorrect answer with high confidence.
Ranked #57 on Image Classification on MNIST
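The worst-case perturbations described above can be illustrated with a fast-gradient-sign-style attack on a toy logistic-regression model. The weights, input, and epsilon here are invented for the example, and the closed-form input gradient is specific to this toy model, not to the networks studied in the paper.

```python
import numpy as np

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x: np.ndarray, w: np.ndarray, b: float, y: int,
                 eps: float = 0.25) -> np.ndarray:
    """Sign-gradient perturbation for logistic regression.

    For the loss J = -log p(y|x), the input gradient is (p - y) * w,
    so the worst-case step of size eps is eps * sign((p - y) * w).
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Toy model that classifies x correctly before the perturbation.
w = np.array([2.0, -1.0])
x = np.array([1.0, 0.5])
y = 1
p_clean = sigmoid(w @ x)                    # > 0.5: correct prediction
x_adv = fgsm_perturb(x, w, 0.0, y, eps=1.0)
p_adv = sigmoid(w @ x_adv)                  # < 0.5: flipped by a small sign step
```

Each coordinate of the input moves by at most eps, yet the prediction flips, which is the phenomenon the abstract describes.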
Using the multi-scale convolutional MultiBox (MSC-MultiBox) approach, we substantially advance the state-of-the-art on the ILSVRC 2014 detection challenge data set, with $0.5$ mAP for a single model and $0.52$ mAP for an ensemble of two models.
We propose a deep convolutional neural network architecture codenamed "Inception", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014).
Deep neural networks are highly expressive models that have recently achieved state-of-the-art performance on speech and visual recognition tasks.
Deep convolutional neural networks have recently achieved state-of-the-art performance on a number of image recognition benchmarks, including the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC-2012).