Search Results for author: Ameesh Shah

Found 8 papers, 2 papers with code

Deep Policy Optimization with Temporal Logic Constraints

no code implementations • 17 Apr 2024 • Ameesh Shah, Cameron Voloshin, Chenxi Yang, Abhinav Verma, Swarat Chaudhuri, Sanjit A. Seshia

In our work, we consider the setting where the task is specified by an LTL objective and there is an additional scalar reward that we need to optimize.

Reinforcement Learning (RL)
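A common way to handle the setting in the abstract above — an LTL task objective alongside a scalar reward — is to compile the formula into a finite automaton and run RL on the product of the environment state and the automaton state. The sketch below is illustrative only, not the paper's algorithm; the `dfa_step`/`product_step` names and the example formula "F goal" (eventually reach the goal) are assumptions for the demo.

```python
# Illustrative sketch (not the paper's method): compile an LTL formula into
# a DFA and step it alongside the environment, so a policy over the product
# state (s, q) can optimize the scalar reward while tracking LTL progress.
# The DFA here encodes "F goal": state 0 = goal not yet seen, 1 = accepting.

def dfa_step(q, label):
    """Advance the 'F goal' DFA: once accepting, stay accepting."""
    if q == 1:
        return 1
    return 1 if label == "goal" else 0

def product_step(env_step, labeler, s, q, a):
    """One step in the product MDP: advance the environment, then the DFA."""
    s_next, scalar_reward = env_step(s, a)
    q_next = dfa_step(q, labeler(s_next))
    return (s_next, q_next), scalar_reward
```

A policy trained over `(s, q)` can then trade off the scalar reward against satisfying the LTL objective, e.g. by adding a bonus when `q` reaches the accepting state.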

Who Needs to Know? Minimal Knowledge for Optimal Coordination

no code implementations • 15 Jun 2023 • Niklas Lauffer, Ameesh Shah, Micah Carroll, Michael Dennis, Stuart Russell

We apply this algorithm to analyze the strategically relevant information for tasks in both a standard and a partially observable version of the Overcooked environment.

Specification-Guided Data Aggregation for Semantically Aware Imitation Learning

no code implementations • 29 Mar 2023 • Ameesh Shah, Jonathan DeCastro, John Gideon, Beyazit Yalcinkaya, Guy Rosman, Sanjit A. Seshia

Advancements in simulation and formal methods-guided environment sampling have enabled the rigorous evaluation of machine learning models in a number of safety-critical scenarios, such as autonomous driving.

Autonomous Driving • Imitation Learning

Demonstration Informed Specification Search

1 code implementation • 20 Dec 2021 • Marcell Vazquez-Chanlatte, Ameesh Shah, Gil Lederman, Sanjit A. Seshia

This paper considers the problem of learning temporal task specifications, e.g., automata and temporal logic, from expert demonstrations.

Learning Differentiable Programs with Admissible Neural Heuristics

1 code implementation • NeurIPS 2020 • Ameesh Shah, Eric Zhan, Jennifer J. Sun, Abhinav Verma, Yisong Yue, Swarat Chaudhuri

This relaxed program is differentiable and can be trained end-to-end, and the resulting training loss is an approximately admissible heuristic that can guide the combinatorial search.
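The key idea above is that an admissible heuristic — one that never overestimates the best achievable cost of completing a partial solution — lets best-first search return the optimal discrete structure while pruning most of the space. The sketch below is a toy illustration, not the paper's implementation: instead of a trained relaxed program, the heuristic is a simple analytic lower bound over a made-up per-position choice problem.

```python
# Illustrative sketch (not the paper's code): best-first search guided by an
# admissible heuristic. costs[i][j] is the cost of picking option j at
# position i; the heuristic for a partial assignment is the sum of the
# per-position minima over the remaining positions, which is a lower bound
# on any completion's cost and therefore admissible.
import heapq

def best_first_search(costs):
    """Return (min total cost, optimal choice list), one choice per position."""
    n = len(costs)
    suffix_lb = [0.0] * (n + 1)           # admissible lower bound per suffix
    for i in range(n - 1, -1, -1):
        suffix_lb[i] = suffix_lb[i + 1] + min(costs[i])
    # Priority queue of (cost_so_far + heuristic, cost_so_far, partial choices).
    frontier = [(suffix_lb[0], 0.0, [])]
    while frontier:
        f, g, partial = heapq.heappop(frontier)
        i = len(partial)
        if i == n:
            return g, partial             # first complete node popped is optimal
        for j, c in enumerate(costs[i]):
            g2 = g + c
            heapq.heappush(frontier, (g2 + suffix_lb[i + 1], g2, partial + [j]))
```

In the paper's setting, the "options" would be discrete program constructs and the lower bound would come from the training loss of the differentiable relaxation rather than a closed-form minimum.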

Finite Automata Can be Linearly Decoded from Language-Recognizing RNNs

no code implementations • ICLR 2019 • Joshua J. Michalenko, Ameesh Shah, Abhinav Verma, Swarat Chaudhuri, Ankit B. Patel

We study the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language.

Clustering
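The "linearly decoded" claim in the title above refers to a probing methodology: checking whether the automaton state is recoverable from an RNN's hidden states by a linear map alone. The sketch below is illustrative, not the paper's experiments — it fakes the hidden states as a fixed linear embedding of DFA states plus noise, fits a least-squares decoder to one-hot state targets, and checks that the argmax recovers the state.

```python
# Illustrative sketch (not the paper's setup): a linear probe from "hidden
# states" to DFA states. In the real experiments H would come from an RNN
# trained to recognize a regular language; here it is synthesized so the
# script is self-contained.
import numpy as np

def fit_linear_decoder(H, states, n_states):
    """Least-squares linear map from hidden vectors H to one-hot DFA states."""
    Y = np.eye(n_states)[states]                # one-hot targets, (N, n_states)
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)   # W has shape (hidden_dim, n_states)
    return W

def decode(H, W):
    """Predict the DFA state for each hidden vector via argmax of H @ W."""
    return np.argmax(H @ W, axis=1)

rng = np.random.default_rng(0)
n_states, hidden_dim, n_samples = 3, 8, 300
E = rng.normal(size=(n_states, hidden_dim))     # fixed embedding per DFA state
states = rng.integers(0, n_states, size=n_samples)
H = E[states] + 0.05 * rng.normal(size=(n_samples, hidden_dim))
W = fit_linear_decoder(H, states, n_states)
accuracy = np.mean(decode(H, W) == states)
```

High decoding accuracy from such a probe is evidence that the automaton state is represented linearly in the hidden space, which is the paper's central question.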

Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks

no code implementations • 27 Feb 2019 • Joshua J. Michalenko, Ameesh Shah, Abhinav Verma, Richard G. Baraniuk, Swarat Chaudhuri, Ankit B. Patel

We investigate the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language.

Clustering
