no code implementations • 21 Jun 2024 • Anton Xue, Avishree Khare, Rajeev Alur, Surbhi Goel, Eric Wong

We study how to subvert language models so that they no longer follow the rules they are given.

1 code implementation • 10 Jun 2024 • Alaia Solko-Breslin, Seewon Choi, Ziyang Li, Neelay Velingker, Rajeev Alur, Mayur Naik, Eric Wong

Many computational tasks can be naturally expressed as a composition of a DNN followed by a program written in a traditional programming language or an API call to an LLM.
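Such a composition can be sketched in a few lines. The classifier below is a hypothetical stand-in for a trained DNN (the mock image simply carries its own label); the point is only the two-stage structure of neural perception followed by a symbolic program:

```python
# Sketch of a neurosymbolic composition: a neural "perception" stage
# followed by a traditional program applied to its symbolic outputs.

def classify_digit(image):
    # Placeholder for a DNN mapping an image to a digit prediction;
    # here the mock image dict carries its own label.
    return image["label"]

def sum_program(digits):
    # Ordinary program consuming the DNN's symbolic outputs.
    return sum(digits)

images = [{"label": 3}, {"label": 5}]
digits = [classify_digit(img) for img in images]
print(sum_program(digits))  # -> 8
```

In practice the downstream stage could equally be an API call to an LLM rather than a handwritten program.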

no code implementations • 26 May 2023 • Rajeev Alur, Osbert Bastani, Kishor Jothimurugan, Mateo Perez, Fabio Somenzi, Ashutosh Trivedi

The difficulty of manually specifying reward functions has led to an interest in using linear temporal logic (LTL) to express objectives for reinforcement learning (RL).
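As an illustration, a typical reach-avoid objective (the predicates here are invented for the example) can be stated in LTL as:

```latex
% Always avoid unsafe states, and eventually reach the goal region.
\varphi \;=\; \mathbf{G}\,\lnot\mathit{unsafe} \;\wedge\; \mathbf{F}\,\mathit{goal}
```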

1 code implementation • 6 Feb 2023 • Kishor Jothimurugan, Steve Hsu, Osbert Bastani, Rajeev Alur

We formulate the problem as a two agent zero-sum game in which the adversary picks the sequence of subtasks.
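A minimal sketch of the zero-sum structure, with an invented payoff matrix: the adversary picks a sub-task sequence (column), the agent picks a policy (row), and the agent's robust value is the maximin:

```python
# Toy zero-sum game. Rows: agent policies; columns: adversary's
# choice of sub-task sequence. Payoffs are made up for illustration.
payoff = [
    [3, 1],  # policy A against sub-task sequences 1 and 2
    [2, 2],  # policy B
]

# The agent's guaranteed value against a worst-case adversary.
maximin = max(min(row) for row in payoff)
print(maximin)  # -> 2
```

Policy B is the robust choice here: it guarantees 2 regardless of which sub-task sequence the adversary picks, while policy A can be forced down to 1.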

1 code implementation • 7 Jun 2022 • Anton Xue, Lars Lindemann, Rajeev Alur

Neural networks are central to many emerging technologies, but verifying their correctness remains a major challenge.

no code implementations • 6 Jun 2022 • Kishor Jothimurugan, Suguman Bansal, Osbert Bastani, Rajeev Alur

Our empirical evaluation demonstrates that our algorithm computes equilibrium policies with high social welfare, whereas state-of-the-art baselines either fail to compute Nash equilibria or compute ones with comparatively lower social welfare.

1 code implementation • 2 Apr 2022 • Anton Xue, Lars Lindemann, Alexander Robey, Hamed Hassani, George J. Pappas, Rajeev Alur

Lipschitz constants of neural networks allow for guarantees of robustness in image classification, safety in controller design, and generalizability beyond the training data.
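A crude upper bound on the Lipschitz constant of a feedforward network is the product of the layers' spectral norms. The papers above compute much tighter certified bounds; this naive product, with random weights, is only illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two invented layers of a small feedforward network.
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]

def naive_lipschitz_bound(weights):
    # Product of spectral norms (largest singular values) upper-bounds
    # the Lipschitz constant of the network w.r.t. the 2-norm,
    # assuming 1-Lipschitz activations such as ReLU.
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=2)
    return bound

print(naive_lipschitz_bound(weights))
```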

1 code implementation • NeurIPS 2021 • Kishor Jothimurugan, Suguman Bansal, Osbert Bastani, Rajeev Alur

Our approach then uses reinforcement learning to learn a neural network policy for each edge (sub-task), combined with a Dijkstra-style planning algorithm that computes a high-level plan in the graph.
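The high-level planning step can be sketched as ordinary Dijkstra over the sub-task graph. The graph and edge costs below are invented; in the setting above, each edge cost would come from an estimate of how reliably the learned sub-task policy completes that edge:

```python
import heapq

# Hypothetical sub-task graph: nodes are abstract states, edges are
# sub-tasks with made-up difficulty costs.
graph = {
    "start":  [("room_a", 1.0), ("room_b", 4.0)],
    "room_a": [("goal", 5.0)],
    "room_b": [("goal", 1.0)],
    "goal":   [],
}

def dijkstra(graph, source, target):
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost in graph[u]:
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the high-level plan (sequence of sub-tasks).
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[target]

print(dijkstra(graph, "start", "goal"))  # -> (['start', 'room_b', 'goal'], 5.0)
```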

no code implementations • 29 Oct 2020 • Kishor Jothimurugan, Osbert Bastani, Rajeev Alur

We propose a novel hierarchical reinforcement learning framework for control with continuous state and action spaces.

Tasks: Hierarchical Reinforcement Learning, Reinforcement Learning

1 code implementation • NeurIPS 2019 • Kishor Jothimurugan, Rajeev Alur, Osbert Bastani

Reinforcement learning is a promising approach for learning control policies for robot tasks.

1 code implementation • 5 Nov 2018 • Radoslav Ivanov, James Weimer, Rajeev Alur, George J. Pappas, Insup Lee

This paper presents Verisig, a hybrid system approach to verifying safety properties of closed-loop systems using neural networks as controllers.
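Verisig itself encodes sigmoid networks as hybrid systems and applies reachability tools; as a much simpler illustration of bounding a neural controller's output, here is a naive interval pass through a single sigmoid neuron (weights and input range are invented):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def propagate_interval(lo, hi, w, b):
    # Push the input interval [lo, hi] through z = w*x + b, then sigmoid.
    # Both maps are monotone, so tracking the two endpoints suffices.
    zlo, zhi = w * lo + b, w * hi + b
    if w < 0:
        zlo, zhi = zhi, zlo  # a negative weight flips the interval
    return sigmoid(zlo), sigmoid(zhi)

# Output range of the neuron for inputs in [-1, 1], with w=2, b=0.
print(propagate_interval(-1.0, 1.0, 2.0, 0.0))
```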

Systems and Control

no code implementations • 29 Nov 2017 • Rajeev Alur, Dana Fisman, Rishabh Singh, Armando Solar-Lezama

Syntax-Guided Synthesis (SyGuS) is the computational problem of finding an implementation f that meets both a semantic constraint given by a logical formula $\varphi$ in a background theory T, and a syntactic constraint given by a grammar G, which specifies the allowed set of candidate implementations.

no code implementations • 23 Nov 2016 • Rajeev Alur, Dana Fisman, Rishabh Singh, Armando Solar-Lezama

Syntax-Guided Synthesis (SyGuS) is the computational problem of finding an implementation f that meets both a semantic constraint given by a logical formula $\varphi$ in a background theory T, and a syntactic constraint given by a grammar G, which specifies the allowed set of candidate implementations.
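A toy enumerative search in the SyGuS spirit: find an implementation of max(x, y) among candidates drawn from a tiny hand-written "grammar", checking each against a semantic spec on a grid of test points. Both the grammar and the spec are invented for illustration:

```python
from itertools import product

# Tiny candidate pool standing in for expressions generated by a grammar G.
candidates = {
    "x": lambda x, y: x,
    "y": lambda x, y: y,
    "x if x >= y else y": lambda x, y: x if x >= y else y,
}

def satisfies_spec(f, tests):
    # Semantic constraint: f(x, y) is an upper bound on both arguments
    # and is one of them -- i.e., f computes max(x, y).
    return all(f(x, y) >= x and f(x, y) >= y and f(x, y) in (x, y)
               for x, y in tests)

tests = list(product(range(-2, 3), repeat=2))
solution = next(expr for expr, f in candidates.items()
                if satisfies_spec(f, tests))
print(solution)  # -> 'x if x >= y else y'
```

Real SyGuS solvers replace the fixed candidate pool with systematic enumeration from G and the test grid with an SMT check in the theory T, but the generate-and-verify loop is the same.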

Contact: hello@paperswithcode.com.
Papers With Code is a free resource with all data licensed under CC-BY-SA.