Search Results for author: Markus N. Rabe

Found 14 papers, 7 papers with code

Baldur: Whole-Proof Generation and Repair with Large Language Models

no code implementations • 8 Mar 2023 • Emily First, Markus N. Rabe, Talia Ringer, Yuriy Brun

Recent work has developed methods to automate formal verification using proof assistants, such as Coq and Isabelle/HOL, e.g., by training a model to predict one proof step at a time, and using that model to search through the space of possible proofs.
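
A schematic of that step-by-step baseline, which Baldur contrasts with generating whole proofs at once; `propose_steps` (the trained next-step model) and `apply_step` (the proof-assistant interface) are hypothetical stand-ins:

```python
# Best-first proof search driven by a next-step prediction model, as a
# sketch of the baseline setup Baldur contrasts with whole-proof
# generation. `propose_steps` and `apply_step` are hypothetical stand-ins
# for a trained tactic predictor and a proof-assistant API.
import heapq, itertools

def best_first_proof_search(goal, propose_steps, apply_step, budget=1000):
    tie = itertools.count()                  # tie-breaker so states never compare
    frontier = [(0.0, next(tie), goal, [])]  # (cost, tie, proof state, steps)
    while frontier and budget > 0:
        cost, _, state, steps = heapq.heappop(frontier)
        budget -= 1
        for step, logprob in propose_steps(state):
            new_state = apply_step(state, step)
            if new_state is None:            # the proof assistant rejected the step
                continue
            if new_state == "proved":        # no goals remain: return the proof
                return steps + [step]
            heapq.heappush(frontier,
                           (cost - logprob, next(tie), new_state, steps + [step]))
    return None                              # search budget exhausted
```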

Autoformalization with Large Language Models

no code implementations • 25 May 2022 • Yuhuai Wu, Albert Q. Jiang, Wenda Li, Markus N. Rabe, Charles Staats, Mateja Jamnik, Christian Szegedy

Autoformalization is the process of automatically translating from natural language mathematics to formal specifications and proofs.

Ranked #1 on Automated Theorem Proving on miniF2F-test (using extra training data)

Automated Theorem Proving • Program Synthesis
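
A hand-written illustration of the task, not model output: the informal statement "the sum of two even numbers is even" and one possible formalization. The sketch below is in Lean 4 with Mathlib, while the paper itself targets Isabelle/HOL:

```lean
import Mathlib

-- Hand-written autoformalization example (not an output of the paper's
-- model): "the sum of two even numbers is even" as a formal theorem.
theorem even_add_even {a b : ℤ} (ha : Even a) (hb : Even b) :
    Even (a + b) := by
  obtain ⟨x, hx⟩ := ha          -- a = x + x
  obtain ⟨y, hy⟩ := hb          -- b = y + y
  exact ⟨x + y, by rw [hx, hy]; ring⟩
```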

Memorizing Transformers

3 code implementations • ICLR 2022 • Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, Christian Szegedy

Language models typically need to be trained or finetuned in order to acquire new knowledge, which involves updating their weights.

Language Modelling • Math
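
The paper's alternative is to let the model look things up instead: internal (key, value) pairs from past inputs are cached in a non-trainable memory, and attention retrieves the k nearest keys for each query. A NumPy sketch of that retrieval step, with names, shapes, and the exact top-k search all illustrative:

```python
# kNN attention over an external memory of cached (key, value) pairs,
# the mechanism Memorizing Transformers adds to a Transformer layer.
# The paper uses approximate kNN at much larger scale; the exact
# argpartition search here is a small-scale stand-in.
import numpy as np

def knn_memory_attention(q, mem_keys, mem_values, k=32):
    """Attend over the k memory entries whose keys best match query q."""
    scores = mem_keys @ q / np.sqrt(q.shape[-1])  # similarity to every cached key
    top = np.argpartition(scores, -k)[-k:]        # indices of the k nearest keys
    w = np.exp(scores[top] - scores[top].max())   # stable softmax over the top-k
    return (w / w.sum()) @ mem_values[top]

rng = np.random.default_rng(0)
q = rng.normal(size=64)
mem_keys = rng.normal(size=(4096, 64))
mem_values = rng.normal(size=(4096, 64))
out = knn_memory_attention(q, mem_keys, mem_values)  # retrieved context vector
```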

Self-attention Does Not Need $O(n^2)$ Memory

10 code implementations • 10 Dec 2021 • Markus N. Rabe, Charles Staats

We present a very simple algorithm for attention that requires $O(1)$ memory with respect to sequence length and an extension to self-attention that requires $O(\log n)$ memory.
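
The trick behind that bound: consume keys and values in pieces while carrying a running maximum, numerator, and denominator for the softmax, so the full n × n score matrix never exists. A NumPy sketch for a single query, using chunks rather than the strictly sequential O(1)-memory formulation (chunk size and names are illustrative):

```python
# Memory-efficient attention for one query: keys/values are consumed in
# chunks while (m, num, den) maintain a numerically stable running
# softmax, so no O(n^2) score matrix is ever materialized.
import numpy as np

def attention_one_query(q, k, v, chunk=64):
    """q: (d,), k and v: (n, d) -> standard softmax attention output, (d,)."""
    n, d = k.shape
    m = -np.inf                # running max of scores, for numerical stability
    num = np.zeros(d)          # running sum of exp(score - m) * value
    den = 0.0                  # running sum of exp(score - m)
    for s0 in range(0, n, chunk):
        scores = k[s0:s0 + chunk] @ q / np.sqrt(d)
        m_new = max(m, scores.max())
        scale = np.exp(m - m_new)              # rescale the old accumulators
        w = np.exp(scores - m_new)
        num = num * scale + w @ v[s0:s0 + chunk]
        den = den * scale + w.sum()
        m = m_new
    return num / den

# Agrees with the naive quadratic-memory computation.
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=16), rng.normal(size=(512, 16)), rng.normal(size=(512, 16))
s = k @ q / np.sqrt(16.0)
p = np.exp(s - s.max())
assert np.allclose(attention_one_query(q, k, v), (p / p.sum()) @ v)
```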

Neural Circuit Synthesis from Specification Patterns

1 code implementation • NeurIPS 2021 • Frederik Schmitt, Christopher Hahn, Markus N. Rabe, Bernd Finkbeiner

We train hierarchical Transformers on the task of synthesizing hardware circuits directly out of high-level logical specifications in linear-time temporal logic (LTL).
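
For a concrete feel of the task, consider the classic two-client arbiter specification G(r1 → F g1) ∧ G(r2 → F g2) ∧ G ¬(g1 ∧ g2): every request is eventually granted, and grants never coincide. Below is a hand-written one-bit circuit realizing it, rendered in Python purely for illustration; the paper's models emit actual hardware circuits:

```python
# A round-robin arbiter realizing the toy LTL specification
#   G(r1 -> F g1) & G(r2 -> F g2) & G !(g1 & g2):
# alternating grants satisfy both liveness conjuncts regardless of the
# requests and never raise both grants at once. Hand-written example,
# not an output of the paper's Transformer.
def round_robin_arbiter():
    turn = 0
    def step(r1: bool, r2: bool):
        nonlocal turn
        g1, g2 = (turn == 0), (turn == 1)
        turn = 1 - turn                    # the circuit's single bit of state
        return g1, g2
    return step

step = round_robin_arbiter()
print([step(True, True) for _ in range(4)])
# [(True, False), (False, True), (True, False), (False, True)]
```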

Teaching Temporal Logics to Neural Networks

2 code implementations • ICLR 2021 • Christopher Hahn, Frederik Schmitt, Jens U. Kreber, Markus N. Rabe, Bernd Finkbeiner

We study two fundamental questions in neuro-symbolic computing: can deep learning tackle challenging problems in logics end-to-end, and can neural networks learn the semantics of logics?
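
To make "semantics of logics" concrete, here is a tiny recursive LTL checker over finite traces; the paper works with infinite, ultimately periodic traces, so treat this as a simplified sketch:

```python
# Recursive checker for (finite-trace) LTL. Formulas are nested tuples;
# a trace is a list of sets of atomic propositions. X is interpreted as
# "strong next" (false past the end of the trace).
def holds(phi, trace, i=0):
    if i >= len(trace):
        return False
    op = phi[0]
    if op == "ap":   return phi[1] in trace[i]
    if op == "not":  return not holds(phi[1], trace, i)
    if op == "and":  return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "X":    return holds(phi[1], trace, i + 1)
    if op == "F":    return any(holds(phi[1], trace, j) for j in range(i, len(trace)))
    if op == "G":    return all(holds(phi[1], trace, j) for j in range(i, len(trace)))
    if op == "U":
        return any(holds(phi[2], trace, j) and
                   all(holds(phi[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(op)

# F(a and X b) over the trace {}, {a}, {b}: satisfied at position 1.
assert holds(("F", ("and", ("ap", "a"), ("X", ("ap", "b")))), [set(), {"a"}, {"b"}])
```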

Mathematical Reasoning in Latent Space

no code implementations • ICLR 2020 • Dennis Lee, Christian Szegedy, Markus N. Rabe, Sarah M. Loos, Kshitij Bansal

We design and conduct a simple experiment to study whether neural networks can perform several steps of approximate reasoning in a fixed dimensional latent space.

Mathematical Reasoning
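
A minimal sketch of that experiment's shape, assuming hypothetical trained networks `embed` and `latent_step`:

```python
# Schematic of multi-step reasoning in a fixed-dimensional latent space:
# the formula is encoded once and all subsequent "rewrite" steps happen
# on the embedding, never decoding back to syntax in between. `embed`
# and `latent_step` are stand-ins for trained networks.
def reason_in_latent_space(formula, embed, latent_step, n_steps):
    z = embed(formula)            # encode the formula once
    for _ in range(n_steps):
        z = latent_step(z)        # approximate one rewrite, purely in latent space
    return z                      # compared against embeddings of true results
```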

Learning to Reason in Large Theories without Imitation

no code implementations • 25 May 2019 • Kshitij Bansal, Christian Szegedy, Markus N. Rabe, Sarah M. Loos, Viktor Toman

Our experiments show that the theorem prover trained with this exploration mechanism outperforms provers that are trained only on human proofs.

Automated Theorem Proving • Imitation Learning +2

A Model Counter's Guide to Probabilistic Systems

no code implementations • 22 Mar 2019 • Marcell Vazquez-Chanlatte, Markus N. Rabe, Sanjit A. Seshia

In this paper, we systematize the modeling of probabilistic systems for the purpose of analyzing them with model counting techniques.
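
The core connection: when a system's randomness amounts to n fair coin flips, the probability of an event equals its model count divided by 2^n. A brute-force toy sketch (the predicate is made up; realistic systems call a #SAT or weighted model counter instead of enumerating):

```python
# Probability via model counting: count the satisfying assignments of a
# predicate over n boolean "coin flips" and divide by 2^n.
from itertools import product

def prob_by_model_counting(phi, n):
    """phi: predicate over an n-tuple of booleans; n fair coin flips."""
    count = sum(1 for bits in product([False, True], repeat=n) if phi(bits))
    return count / 2 ** n

# Probability that at least two of three fair coins come up heads: 4/8.
at_least_two_heads = lambda bits: sum(bits) >= 2
print(prob_by_model_counting(at_least_two_heads, 3))  # 0.5
```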

Learning Heuristics for Quantified Boolean Formulas through Deep Reinforcement Learning

1 code implementation • 20 Jul 2018 • Gil Lederman, Markus N. Rabe, Edward A. Lee, Sanjit A. Seshia

We demonstrate how to learn efficient heuristics for automated reasoning algorithms for quantified Boolean formulas through deep reinforcement learning.

Reinforcement Learning (RL)
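
Schematically, the learned policy slots in where a hand-tuned heuristic would sit, e.g., choosing the branching variable at each solver decision. `policy_scores` below is a hypothetical stand-in for the paper's trained network (a graph neural network over the formula):

```python
# Using a learned policy as a solver's branching heuristic: the solver
# asks the network to score variables and branches on the best-scoring
# unassigned one. `policy_scores` is a hypothetical trained model.
def pick_branching_variable(solver_state, unassigned, policy_scores):
    scores = policy_scores(solver_state)   # one score per variable index
    return max(unassigned, key=lambda var: scores[var])
```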

Temporal Logics for Hyperproperties

1 code implementation • 17 Jan 2014 • Michael R. Clarkson, Bernd Finkbeiner, Masoud Koleini, Kristopher K. Micinski, Markus N. Rabe, César Sánchez

Standard temporal logics such as LTL, CTL, and CTL* can refer only to a single path at a time, hence cannot express many hyperproperties of interest.

Logic in Computer Science
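
HyperLTL, introduced in this paper, lifts that restriction by quantifying over traces. A sketch of a noninterference-style property that relates pairs of traces, and is therefore inexpressible in plain LTL (i and o are illustrative input/output observables):

```latex
% Noninterference-style hyperproperty in HyperLTL: any two traces that
% always agree on the observable inputs i must always agree on the
% observable outputs o. The trace quantifiers \forall\pi make this
% expressible, whereas plain LTL sees only a single trace at a time.
\forall \pi.\; \forall \pi'.\;
  \square\,(i_\pi = i_{\pi'}) \;\rightarrow\; \square\,(o_\pi = o_{\pi'})
```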
