no code implementations • 18 Apr 2024 • Abhinav Lalwani, Tasha Kim, Lovish Chopra, Christopher Hahn, Zhijing Jin, Mrinmaya Sachan
In this paper, we introduce Natural Language to First-Order Logic (NL2FOL), a framework to autoformalize natural language to FOL step by step using Large Language Models (LLMs).
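The idea of stepwise autoformalization can be illustrated with a toy pipeline. The sketch below is purely hypothetical: the real NL2FOL framework drives each step with an LLM, whereas here a hard-coded lexicon stands in for the model calls.

```python
# Minimal sketch of stepwise NL-to-FOL autoformalization.
# The lexicon and the two-step decomposition are illustrative assumptions,
# not the actual NL2FOL method.

def extract_predicates(sentence):
    # Step 1: identify predicates (a toy lookup standing in for an LLM call).
    lexicon = {"humans": "Human", "mortal": "Mortal"}
    return [fol for word, fol in lexicon.items() if word in sentence.lower()]

def compose_fol(predicates):
    # Step 2: compose the extracted predicates into a quantified formula.
    p, q = predicates
    return f"forall x. ({p}(x) -> {q}(x))"

formula = compose_fol(extract_predicates("All humans are mortal."))
print(formula)  # forall x. (Human(x) -> Mortal(x))
```

Breaking the translation into explicit steps is what makes the intermediate artifacts (here, the predicate list) inspectable.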
1 code implementation • 2 Apr 2024 • Joel Niklaus, Lucia Zheng, Arya D. McCarthy, Christopher Hahn, Brian M. Rosen, Peter Henderson, Daniel E. Ho, Garrett Honke, Percy Liang, Christopher Manning
We publish LawInstruct as a resource for further study of instruction tuning in the legal domain.
1 code implementation • 22 Jan 2024 • Matthias Cosler, Christopher Hahn, Ayham Omar, Frederik Schmitt
At the core of the solver lies a seamless integration of neural and symbolic approaches to solving the reactive synthesis problem.
1 code implementation • 18 May 2023 • Haoze Wu, Christopher Hahn, Florian Lonsing, Makai Mann, Raghuram Ramanujan, Clark Barrett
We present Self-Driven Strategy Learning ($\textit{sdsl}$), a lightweight online learning methodology for automated reasoning tasks that involve solving a set of related problems.
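The core loop of such an online methodology can be sketched as follows. This is an assumption-laden stand-in, not the paper's algorithm: the solver, the strategy set, and the greedy scoring rule are all illustrative.

```python
import random

# Hedged sketch of learning a solving strategy online from a stream of
# related problems, in the spirit of sdsl (names and scoring are invented).

def solve(problem, strategy):
    # Stand-in solver: returns a runtime in [0, 1); lower is better.
    random.seed(hash((problem, strategy)))
    return random.random()

strategies = ["A", "B", "C"]
scores = {s: [] for s in strategies}

for problem in range(20):
    # Greedily pick the strategy with the best average runtime observed so far.
    best = min(strategies,
               key=lambda s: sum(scores[s]) / len(scores[s]) if scores[s] else 0.5)
    runtime = solve(problem, best)
    scores[best].append(runtime)  # the learner trains on its own solving runs
```

The key property mirrored here is that the training data is generated by the solver itself while it works through the related problem set, so no offline dataset is needed.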
no code implementations • 8 Mar 2023 • Matthias Cosler, Christopher Hahn, Daniel Mendoza, Frederik Schmitt, Caroline Trippel
A rigorous formalization of desired system requirements is indispensable when performing any verification task.
1 code implementation • 2 Mar 2023 • Matthias Cosler, Frederik Schmitt, Christopher Hahn, Bernd Finkbeiner
We propose a separated hierarchical Transformer for multimodal representation learning of the formal specification and the circuit.
no code implementations • 4 Jun 2022 • Christopher Hahn, Frederik Schmitt, Julia J. Tillman, Niklas Metzger, Julian Siber, Bernd Finkbeiner
We study the generalization abilities of language models when translating natural language into formal specifications with complex semantics.
1 code implementation • 30 May 2022 • Niklas Metzger, Christopher Hahn, Julian Siber, Frederik Schmitt, Bernd Finkbeiner
In this paper, we study the computation of how much an input token in a Transformer model influences its prediction.
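One simple baseline for this kind of influence computation is occlusion: remove a token and measure how much the model's output changes. The sketch below uses a trivial stand-in "model" and is not the attribution method of the paper.

```python
# Toy occlusion-based token influence (illustrative assumption, not the
# paper's method): score each token by the output change when it is removed.

def model(tokens):
    # Stand-in "model": output counts negation words in the input.
    return sum(t in {"not", "never"} for t in tokens)

def influence(tokens):
    base = model(tokens)
    return {i: abs(base - model(tokens[:i] + tokens[i + 1:]))
            for i in range(len(tokens))}

print(influence(["the", "light", "is", "not", "on"]))
# token 3 ("not") gets influence 1; all other tokens get 0
```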
1 code implementation • 19 Oct 2021 • Jens U. Kreber, Christopher Hahn
We study the capabilities of GANs and Wasserstein GANs equipped with Transformer encoders to generate sensible and challenging training data for symbolic reasoning domains.
1 code implementation • NeurIPS 2021 • Frederik Schmitt, Christopher Hahn, Markus N. Rabe, Bernd Finkbeiner
We train hierarchical Transformers on the task of synthesizing hardware circuits directly out of high-level logical specifications in linear-time temporal logic (LTL).
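A classic example of such a specification is the arbiter property "every request is eventually granted", written in LTL as G (r -> F g). The minimal evaluator below checks formulas in this fragment on finite traces; note that in the synthesis setting LTL is interpreted over infinite traces, so this is only a sketch of the semantics.

```python
# Minimal LTL evaluator on finite traces (illustrative only; the synthesis
# task interprets specifications over infinite traces).
# Formulas are nested tuples; a trace is a list of sets of atomic propositions.

def holds(formula, trace, i=0):
    op, *args = formula
    if op == "ap":                       # atomic proposition
        return args[0] in trace[i]
    if op == "G":                        # globally, on the finite suffix
        return all(holds(args[0], trace, j) for j in range(i, len(trace)))
    if op == "F":                        # finally (eventually)
        return any(holds(args[0], trace, j) for j in range(i, len(trace)))
    if op == "->":                       # implication
        return (not holds(args[0], trace, i)) or holds(args[1], trace, i)
    raise ValueError(f"unknown operator: {op}")

# G (r -> F g): every request r is eventually followed by a grant g.
spec = ("G", ("->", ("ap", "r"), ("F", ("ap", "g"))))
trace = [{"r"}, set(), {"g"}, set()]
print(holds(spec, trace))  # True
```

A synthesized circuit would have to satisfy such a specification on every possible input trace, which is what makes the end-to-end learning task hard.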
no code implementations • 18 Jan 2021 • Bernd Finkbeiner, Christopher Hahn, Jana Hofmann, Leander Tentrup
We furthermore study the realizability problem of HyperQPTL.
Logic in Computer Science
no code implementations • 18 Jan 2021 • Bernd Finkbeiner, Christopher Hahn, Marvin Stenger, Leander Tentrup
Hyperproperties, such as non-interference and observational determinism, relate multiple computation traces with each other and are thus not monitorable by tools that consider computations in isolation.
Logic in Computer Science
2 code implementations • ICLR 2021 • Christopher Hahn, Frederik Schmitt, Jens U. Kreber, Markus N. Rabe, Bernd Finkbeiner
We study two fundamental questions in neuro-symbolic computing: can deep learning tackle challenging problems in logics end-to-end, and can neural networks learn the semantics of logics?
4 code implementations • 31 May 2019 • Christopher Hahn, Marvin Stenger, Leander Tentrup
Verifying hyperproperties at runtime is a challenging problem as hyperproperties, such as non-interference and observational determinism, relate multiple computation traces with each other.
Logic in Computer Science
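The distinguishing feature of hyperproperties — that they relate multiple traces rather than judging each trace in isolation — can be illustrated with observational determinism. The simplified check below compares all trace pairs; it is a sketch under invented field names, not the runtime monitoring algorithm of the paper.

```python
from itertools import combinations

# Simplified observational-determinism check (illustrative, not the paper's
# monitor): traces agreeing on low-security inputs must agree on
# low-security outputs. Note the check inspects *pairs* of traces.

def observationally_deterministic(traces):
    for t1, t2 in combinations(traces, 2):
        if t1["low_in"] == t2["low_in"] and t1["low_out"] != t2["low_out"]:
            return False
    return True

traces = [
    {"low_in": 0, "low_out": 1},
    {"low_in": 0, "low_out": 1},   # same low input, same low output: OK
    {"low_in": 1, "low_out": 5},
]
print(observationally_deterministic(traces))  # True
```

A tool that examines one computation at a time can never detect a violation of such a property, since any single trace is consistent with it.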