no code implementations • 8 Mar 2023 • Emily First, Markus N. Rabe, Talia Ringer, Yuriy Brun
Recent work has developed methods to automate formal verification using proof assistants, such as Coq and Isabelle/HOL, e.g., by training a model to predict one proof step at a time and using that model to search through the space of possible proofs.
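To make the "predict one step, then search" recipe concrete, here is a minimal best-first sketch of that loop. It is an assumed illustration, not the paper's implementation: `model.suggest`, `state.apply`, and `state.is_done` are hypothetical interfaces.

```python
import heapq

def best_first_proof_search(initial_state, model, max_expansions=1000, beam=8):
    """Best-first search over proof states, scored by a step-prediction model.

    model.suggest(state) -> list of (tactic, log_prob) pairs (hypothetical)
    state.apply(tactic)  -> next proof state, or None if the tactic fails
    state.is_done()      -> True when all goals are closed
    """
    # Priority queue ordered by cumulative negative log-probability.
    frontier = [(0.0, 0, initial_state, [])]
    counter = 1  # tie-breaker so proof states are never compared directly
    for _ in range(max_expansions):
        if not frontier:
            break
        cost, _, state, proof = heapq.heappop(frontier)
        if state.is_done():
            return proof  # the sequence of tactics that closes every goal
        for tactic, log_prob in model.suggest(state)[:beam]:
            next_state = state.apply(tactic)
            if next_state is not None:
                heapq.heappush(
                    frontier,
                    (cost - log_prob, counter, next_state, proof + [tactic]),
                )
                counter += 1
    return None  # search budget exhausted without finding a proof
```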
no code implementations • 25 May 2022 • Yuhuai Wu, Albert Q. Jiang, Wenda Li, Markus N. Rabe, Charles Staats, Mateja Jamnik, Christian Szegedy
Autoformalization is the process of automatically translating from natural language mathematics to formal specifications and proofs.
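For concreteness, a minimal example of the kind of input/output pair autoformalization targets, rendered here in Lean 4 with Mathlib. This is an assumed illustration: the paper itself translates into Isabelle/HOL, so both the statement and the proof term below are my rendering, not the paper's output.

```lean
import Mathlib

-- Informal input:  "The sum of two odd integers is even."
-- A formalization a system like this aims to produce:
theorem sum_of_two_odds_is_even (a b : ℤ) (ha : Odd a) (hb : Odd b) :
    Even (a + b) :=
  Odd.add_odd ha hb
```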
Ranked #1 on Automated Theorem Proving on miniF2F-test (using extra training data)
4 code implementations • ICLR 2022 • Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, Christian Szegedy
Language models typically need to be trained or finetuned in order to acquire new knowledge, which involves updating their weights.
17 code implementations • 10 Dec 2021 • Markus N. Rabe, Charles Staats
We present a very simple algorithm for attention that requires $O(1)$ memory with respect to sequence length and an extension to self-attention that requires $O(\log n)$ memory.
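The core trick is an online (streaming) softmax: process one key/value pair at a time while maintaining only a running maximum, a running weighted sum, and a running normalizer. A minimal single-query sketch in NumPy, following the idea in the paper (variable names are mine):

```python
import numpy as np

def attention_single_query(q, keys, values):
    """Attention for one query with O(1) memory in the sequence length:
    stream over (key, value) pairs instead of materializing all scores.
    """
    m = -np.inf                   # running max of scores (numerical stability)
    s = np.zeros_like(values[0])  # running weighted sum of values
    z = 0.0                       # running softmax normalizer
    for k, v in zip(keys, values):
        score = float(q @ k)
        m_new = max(m, score)
        # Rescale the previous accumulators to the new maximum.
        correction = np.exp(m - m_new) if np.isfinite(m) else 0.0
        w = np.exp(score - m_new)
        s = s * correction + w * v
        z = z * correction + w
        m = m_new
    return s / z
```

For a full sequence this is applied per query (or per chunk of queries), trading memory for recomputation; per the paper, the extra $O(\log n)$ term in the self-attention case is just the bookkeeping for the current position index.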
1 code implementation • NeurIPS 2021 • Frederik Schmitt, Christopher Hahn, Markus N. Rabe, Bernd Finkbeiner
We train hierarchical Transformers on the task of synthesizing hardware circuits directly out of high-level logical specifications in linear-time temporal logic (LTL).
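For concreteness, a classic specification of the kind such a model consumes. This two-client arbiter is an illustrative textbook example, not necessarily drawn from the paper's benchmark set:

```latex
% Two-client arbiter in LTL: every request is eventually granted,
% and the two grants are mutually exclusive.
\[
  \varphi \;=\;
  \mathbf{G}\,(r_0 \rightarrow \mathbf{F}\, g_0)
  \;\wedge\;
  \mathbf{G}\,(r_1 \rightarrow \mathbf{F}\, g_1)
  \;\wedge\;
  \mathbf{G}\,\neg(g_0 \wedge g_1)
\]
```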
no code implementations • ICLR 2021 • Markus N. Rabe, Dennis Lee, Kshitij Bansal, Christian Szegedy
We examine whether self-supervised language modeling applied to mathematical formulas enables logical reasoning.
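One plausible form of such self-supervision on formulas, sketched below under my own assumptions (not necessarily the paper's exact training task), is masked-subterm prediction: hide a subtree of a formula's syntax tree and train the model to reconstruct it.

```python
import random

def make_masked_subterm_example(tree, rng=random):
    """Build one self-supervised example from a formula's syntax tree.

    Trees are nested tuples like ('add', ('var', 'x'), ('lit', 1)):
    element 0 is the operator label, the rest are children.
    Returns (masked_tree, target_subtree).
    """
    # Collect paths to all subtrees (never pointing at operator labels).
    paths = []
    def walk(node, path):
        paths.append(path)
        if isinstance(node, tuple):
            for i, child in enumerate(node[1:], start=1):
                walk(child, path + (i,))
    walk(tree, ())
    path = rng.choice(paths)

    # Extract the target subtree and splice a mask token in its place.
    def splice(node, p):
        if not p:
            return '<MASK>', node
        i = p[0]
        new_child, target = splice(node[i], p[1:])
        return node[:i] + (new_child,) + node[i + 1:], target

    return splice(tree, path)
```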
2 code implementations • ICLR 2021 • Christopher Hahn, Frederik Schmitt, Jens U. Kreber, Markus N. Rabe, Bernd Finkbeiner
We study two fundamental questions in neuro-symbolic computing: can deep learning tackle challenging problems in logics end-to-end, and can neural networks learn the semantics of logics.
no code implementations • ICLR 2020 • Dennis Lee, Christian Szegedy, Markus N. Rabe, Sarah M. Loos, Kshitij Bansal
We design and conduct a simple experiment to study whether neural networks can perform several steps of approximate reasoning in a fixed dimensional latent space.
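Schematically, the experiment amounts to embedding a statement once and then iterating a learned step function without ever decoding back to symbols in between; `embed`, `step`, and `classify` below are hypothetical trained networks, shown only to fix the shape of the idea.

```python
def reason_in_latent_space(embed, step, classify, statement, n_steps=4):
    """Multi-step approximate reasoning in a fixed-dimensional latent space:
    embed once, apply a learned step function several times, then read out.
    """
    z = embed(statement)      # fixed-dimensional vector
    for _ in range(n_steps):
        z = step(z)           # one approximate reasoning step, purely latent
    return classify(z)        # e.g., a predicted property of the result
```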
no code implementations • 25 May 2019 • Kshitij Bansal, Christian Szegedy, Markus N. Rabe, Sarah M. Loos, Viktor Toman
Our experiments show that the theorem prover trained with this exploration mechanism outperforms provers that are trained only on human proofs.
Ranked #3 on Automated Theorem Proving on HOList benchmark
no code implementations • ICLR 2019 • Gil Lederman, Markus N. Rabe, Edward A. Lee, Sanjit A. Seshia
We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning.
3 code implementations • 5 Apr 2019 • Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy, Stewart Wilcox
We present an environment, benchmark, and deep learning driven automated theorem prover for higher-order logic.
Ranked #2 on Automated Theorem Proving on HOList benchmark
no code implementations • 22 Mar 2019 • Marcell Vazquez-Chanlatte, Markus N. Rabe, Sanjit A. Seshia
In this paper, we systematize the modeling of probabilistic systems for the purpose of analyzing them with model counting techniques.
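The underlying connection: if a system's $n$ Boolean inputs are independent fair coin flips, the probability of an event $\varphi$ equals its model count divided by $2^n$. A brute-force illustration of that identity (real analyses would use a #SAT solver rather than enumeration):

```python
from itertools import product

def probability_by_model_counting(phi, n_vars):
    """P(phi) = #models(phi) / 2^n when the n inputs are fair coin flips.
    phi is any predicate over a tuple of booleans.
    """
    models = sum(1 for bits in product((False, True), repeat=n_vars)
                 if phi(bits))
    return models / 2 ** n_vars

# Example: probability that at least two of three fair coins are heads.
p = probability_by_model_counting(lambda b: sum(b) >= 2, 3)
assert p == 0.5
```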
1 code implementation • 20 Jul 2018 • Gil Lederman, Markus N. Rabe, Edward A. Lee, Sanjit A. Seshia
We demonstrate how to learn efficient heuristics for automated reasoning algorithms for quantified Boolean formulas through deep reinforcement learning.
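A heavily hedged sketch of what such a training loop can look like: a policy network scores the branchable variables at each decision point, and a REINFORCE-style update rewards runs that finish with fewer decisions. `policy.scores`, `policy.step`, and `solver.solve(..., branch=...)` are hypothetical interfaces, not the paper's API.

```python
import numpy as np

def train_branching_heuristic(policy, solver, formulas, baseline=0.0):
    """Learn a branching heuristic with policy gradients (sketch)."""
    for formula in formulas:
        log_probs = []

        def choose(state):
            scores = policy.scores(state)          # one score per variable
            probs = np.exp(scores - scores.max())
            probs /= probs.sum()
            var = np.random.choice(len(probs), p=probs)
            log_probs.append(np.log(probs[var]))
            return var

        decisions = solver.solve(formula, branch=choose)
        reward = -decisions                        # fewer decisions is better
        advantage = reward - baseline
        loss = -advantage * sum(log_probs)         # REINFORCE objective
        policy.step(loss)                          # hypothetical: backprop + update
        baseline = 0.9 * baseline + 0.1 * reward   # running reward baseline
```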
1 code implementation • 17 Jan 2014 • Michael R. Clarkson, Bernd Finkbeiner, Masoud Koleini, Kristopher K. Micinski, Markus N. Rabe, César Sánchez
Standard temporal logics such as LTL, CTL, and CTL* can refer only to a single path at a time, hence cannot express many hyperproperties of interest.
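HyperLTL, introduced in this paper, closes that gap by quantifying explicitly over traces. For example, observational determinism relates two executions at once, which no single-trace LTL formula can express:

```latex
% Observational determinism in HyperLTL: any two traces that always
% agree on the input also agree on the output at every step.
\[
  \forall \pi.\, \forall \pi'.\;
  \mathbf{G}\,(i_\pi \leftrightarrow i_{\pi'})
  \rightarrow
  \mathbf{G}\,(o_\pi \leftrightarrow o_{\pi'})
\]
```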