no code implementations • 26 Jan 2023 • Chenxi Yang, Greg Anderson, Swarat Chaudhuri
An abstract-interpretation-based signal is used to guide policy learning, and the same abstract interpretation directly yields the robustness certificate returned at convergence.
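As a rough illustration of how an abstract interpretation can serve as both a training signal and the basis of a certificate, the sketch below propagates interval bounds through a small feed-forward network under a bounded input perturbation. The layer structure, the L-infinity ball, and the use of the certified lower bound as a training signal are assumptions made for illustration, not the paper's construction.

    import numpy as np

    def interval_affine(lo, hi, W, b):
        """Propagate an interval [lo, hi] through the affine map x -> W @ x + b."""
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    def worst_case_lower_bound(x, eps, layers):
        """Certified lower bound on the network outputs over the L-inf ball
        of radius eps around x; layers is a list of (W, b) pairs."""
        lo, hi = x - eps, x + eps
        for W, b in layers[:-1]:
            lo, hi = interval_affine(lo, hi, W, b)
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
        W, b = layers[-1]
        lo, _ = interval_affine(lo, hi, W, b)
        return lo  # usable as a robustness-aware training signal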
1 code implementation • 1 Nov 2022 • Zayne Sprague, Kaj Bostrom, Swarat Chaudhuri, Greg Durrett
A growing body of work studies how to answer a question or verify a claim by generating a natural language "proof": a chain of deductive inferences yielding the answer based on a set of premises.
no code implementations • 10 Oct 2022 • Jennifer J. Sun, Megan Tjandrasuwita, Atharva Sehgal, Armando Solar-Lezama, Swarat Chaudhuri, Yisong Yue, Omar Costilla-Reyes
Neurosymbolic Programming (NP) techniques have the potential to accelerate scientific discovery.
1 code implementation • 28 Sep 2022 • Greg Anderson, Swarat Chaudhuri, Isil Dillig
In reinforcement learning for safety-critical settings, it is often desirable for the agent to obey safety constraints at all points in time, including during training.
no code implementations • 20 Jun 2022 • Cameron Voloshin, Hoang M. Le, Swarat Chaudhuri, Yisong Yue
We study the problem of policy optimization (PO) with linear temporal logic (LTL) constraints.
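One minimal way to write down such a constrained problem (symbols chosen here for illustration, not necessarily the paper's notation) is

    \max_{\pi} \; J(\pi) \quad \text{s.t.} \quad \Pr_{\pi}\!\big[\tau \models \varphi\big] \;\ge\; 1 - \delta

where J(π) is the expected return, τ is a trajectory drawn under π, φ is the LTL specification, and δ is a violation tolerance.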
2 code implementations • NeurIPS Workshop AIPLANS 2021 • Chenxi Yang, Swarat Chaudhuri
We study the problem of learning worst-case-safe parameters for programs that use neural networks as well as symbolic, human-written code.
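Written as a specification (with notation assumed for illustration), the learning problem is to find parameters under which the whole program behaves safely on every input in a given set:

    \text{find } \theta \;\; \text{s.t.} \;\; \forall x \in \mathcal{X}_{\mathrm{in}}.\;\; P_{\theta}(x) \in \mathcal{S}_{\mathrm{safe}}

where P_θ denotes the neurosymbolic program (symbolic, human-written code together with neural components parameterized by θ) and S_safe the set of safe outcomes.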
no code implementations • 16 Jan 2022 • Kaj Bostrom, Zayne Sprague, Swarat Chaudhuri, Greg Durrett
In settings from fact-checking to question answering, we frequently want to know whether a collection of evidence (premises) entails a hypothesis.
no code implementations • NeurIPS 2021 • Rohan Mukherjee, Yeming Wen, Dipak Chaudhari, Thomas W. Reps, Swarat Chaudhuri, Chris Jermaine
State-of-the-art neural models of source code tend to be evaluated on the generation of individual expressions and lines of code, and commonly fail on long-horizon tasks such as the generation of entire method bodies.
1 code implementation • 28 Jul 2021 • Eric Zhan, Jennifer J. Sun, Ann Kennedy, Yisong Yue, Swarat Chaudhuri
We present a framework for the unsupervised learning of neurosymbolic encoders, which are encoders obtained by composing neural networks with symbolic programs from a domain-specific language.
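A minimal sketch of the composition, assuming a learned feature extractor followed by a fixed program from a hypothetical DSL of thresholded comparisons; the class name, dimensions, and the program itself are illustrative, not taken from the paper.

    import torch
    import torch.nn as nn

    class NeurosymbolicEncoder(nn.Module):
        """Illustrative neurosymbolic encoder: a neural feature extractor
        composed with a symbolic program from a tiny, hypothetical DSL."""

        def __init__(self, in_dim, feat_dim=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))

        def symbolic_program(self, z):
            # Hypothetical DSL program: compare learned features to thresholds
            # and emit a discrete, human-readable code.
            fast = z[:, 0] > 0.5
            close = z[:, 1] > 0.0
            return torch.stack([fast, close], dim=1).float()

        def forward(self, x):
            return self.symbolic_program(self.features(x))

In the actual framework the symbolic program is itself searched for and learned; the sketch only shows the neural-into-symbolic composition.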
no code implementations • 11 Jun 2021 • Megan Tjandrasuwita, Jennifer J. Sun, Ann Kennedy, Swarat Chaudhuri, Yisong Yue
Hand-annotated data can vary due to factors such as subjective differences, intra-rater variability, and differing annotator expertise.
1 code implementation • EMNLP 2021 • Kaj Bostrom, Xinyu Zhao, Swarat Chaudhuri, Greg Durrett
Natural language is an attractive representation for this purpose -- it is both highly expressive and easy for humans to understand.
1 code implementation • ICCV 2021 • Arkabandhu Chowdhury, Mingchao Jiang, Swarat Chaudhuri, Chris Jermaine
Recent papers have suggested that transfer learning can outperform sophisticated meta-learning methods for few-shot image classification.
1 code implementation • NeurIPS 2020 • Greg Anderson, Abhinav Verma, Isil Dillig, Swarat Chaudhuri
We present Revel, a partially neural reinforcement learning (RL) framework for provably safe exploration in continuous state and action spaces.
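As a hedged sketch of the general shielding idea behind provably safe exploration (not Revel's actual procedure, which interleaves verification with learning), one can picture a verified fallback policy overriding the neural policy whenever safety cannot be proven; all four callables below are assumptions of the sketch.

    def shielded_action(state, neural_policy, safe_policy, is_provably_safe):
        """Take the neural action only if a verifier can prove it keeps the
        system inside the safe region; otherwise fall back to a verified
        symbolic policy."""
        a = neural_policy(state)
        if is_provably_safe(state, a):
            return a
        return safe_policy(state)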
1 code implementation • NeurIPS 2020 • Ameesh Shah, Eric Zhan, Jennifer J. Sun, Abhinav Verma, Yisong Yue, Swarat Chaudhuri
This relaxed program is differentiable and can be trained end-to-end, and the resulting training loss is an approximately admissible heuristic that can guide the combinatorial search.
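As an illustrative sketch of how such a training loss can steer a combinatorial search over partial programs (the helper callables are assumptions, not the paper's API), a best-first search can use the relaxation's trained loss as its node score:

    import heapq

    def heuristic_guided_search(root, expand, relaxation_loss, is_complete):
        """Best-first search over partial programs, scoring each candidate by
        the training loss of its differentiable relaxation, used here as an
        (approximately admissible) heuristic."""
        frontier = [(relaxation_loss(root), 0, root)]
        tie = 1
        while frontier:
            score, _, prog = heapq.heappop(frontier)
            if is_complete(prog):
                return prog, score
            for child in expand(prog):
                heapq.heappush(frontier, (relaxation_loss(child), tie, child))
                tie += 1
        return None, float("inf")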
1 code implementation • 17 Apr 2020 • Arkabandhu Chowdhury, Dipak Chaudhari, Swarat Chaudhuri, Chris Jermaine
We present a new approach, called meta-meta classification, to learning in small-data settings.
no code implementations • NeurIPS 2019 • Abhinav Verma, Hoang M. Le, Yisong Yue, Swarat Chaudhuri
We view our learning task as optimization in policy space, subject to the constraint that the desired policy has a programmatic representation. We solve this optimization problem using a form of mirror descent that takes a gradient step in the unconstrained policy space and then projects back onto the constrained space.
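A hedged sketch of that update-then-project loop, with all helper functions assumed rather than taken from the paper:

    def imitation_projected_gradient(program_policy, lift, policy_gradient_step,
                                     project_to_program, n_iters=10):
        """Illustrative mirror-descent-style loop: lift the programmatic policy
        into the unconstrained (neural) policy space, take a policy-gradient
        step there, then project back by synthesizing a program that imitates
        the improved neural policy."""
        for _ in range(n_iters):
            neural_policy = lift(program_policy)            # move to unconstrained space
            improved = policy_gradient_step(neural_policy)  # unconstrained gradient step
            program_policy = project_to_program(improved)   # imitation-based projection
        return program_policy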
1 code implementation • 14 May 2019 • Richard Cheng, Abhinav Verma, Gabor Orosz, Swarat Chaudhuri, Yisong Yue, Joel W. Burdick
We show that functional regularization yields a bias-variance trade-off, and propose an adaptive tuning strategy to optimize this trade-off.
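A minimal sketch of functional regularization toward a control prior, assuming a simple convex blend of the two action suggestions; the specific formula and any online adaptation of the weight are simplifications for illustration.

    def regularized_action(state, learned_policy, control_prior, lam):
        """Blend the learned policy's action with a hand-designed control
        prior. A larger lam biases the agent toward the prior (lower variance,
        more bias); lam can be tuned adaptively during training."""
        u_learned = learned_policy(state)
        u_prior = control_prior(state)
        return (u_learned + lam * u_prior) / (1.0 + lam)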
no code implementations • ICLR 2019 • Joshua J. Michalenko, Ameesh Shah, Abhinav Verma, Swarat Chaudhuri, Ankit B. Patel
We study the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language.
no code implementations • 22 Apr 2019 • Greg Anderson, Shankara Pailoor, Isil Dillig, Swarat Chaudhuri
In recent years, the notion of local robustness (or robustness for short) has emerged as a desirable property of deep neural networks.
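For reference, one standard formalization of local robustness of a classifier f at an input x with radius ε (the choice of norm and the argmax formulation are conventions, not necessarily the paper's):

    \forall x'.\;\; \lVert x' - x \rVert_{\infty} \le \epsilon \;\Longrightarrow\; \arg\max_i f_i(x') = \arg\max_i f_i(x)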
no code implementations • 27 Feb 2019 • Joshua J. Michalenko, Ameesh Shah, Abhinav Verma, Richard G. Baraniuk, Swarat Chaudhuri, Ankit B. Patel
We investigate the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language.
no code implementations • ICML 2018 • Abhinav Verma, Vijayaraghavan Murali, Rishabh Singh, Pushmeet Kohli, Swarat Chaudhuri
Unlike the popular Deep Reinforcement Learning (DRL) paradigm, which represents policies by neural networks, PIRL (Programmatically Interpretable Reinforcement Learning) represents policies using a high-level, domain-specific programming language.
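As a purely hypothetical example of what a policy in such a language might look like (the field names, constants, and control logic are invented for illustration; the paper's DSL and benchmarks differ):

    def programmatic_policy(obs):
        """Readable, parameterized control logic standing in for a policy,
        instead of an opaque neural network."""
        err = obs["target_speed"] - obs["speed"]
        if abs(obs["track_angle"]) > 0.2:
            steer = 0.9 * obs["track_angle"]   # sharp corner: steer into the turn
            accel = 0.3 * max(err, 0.0)        # and ease off the throttle
        else:
            steer = 0.4 * obs["track_angle"]
            accel = 0.7 * max(err, 0.0)
        return {"steer": steer, "accel": accel}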
2 code implementations • NeurIPS 2018 • Lazar Valkov, Dipak Chaudhari, Akash Srivastava, Charles Sutton, Swarat Chaudhuri
We present a neurosymbolic framework for the lifelong learning of algorithmic tasks that mix perception and procedural reasoning.
no code implementations • 29 Jan 2018 • Yue Wang, Swarat Chaudhuri, Lydia E. Kavraki
In this work, we study POMDPs with safe-reachability objectives, which require that a goal state is eventually reached with probability above one threshold while the probability of visiting unsafe states stays below another.
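In symbols (with threshold and event names chosen here for illustration), a safe-reachability objective asks that

    \Pr\big(\Diamond\,\mathit{Goal}\big) \;>\; \delta_{\mathrm{goal}} \qquad \text{and} \qquad \Pr\big(\Diamond\,\mathit{Unsafe}\big) \;<\; \delta_{\mathrm{unsafe}}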
no code implementations • 25 May 2017 • Matthew Amodio, Swarat Chaudhuri, Thomas W. Reps
During generation, NAMs (neural attribute machines) violate the constraints of the underlying grammar significantly less often than RNNs trained only on samples from the language of the grammar.
1 code implementation • ICLR 2018 • Vijayaraghavan Murali, Letao Qi, Swarat Chaudhuri, Chris Jermaine
We study the problem of generating source code in a strongly typed, Java-like programming language, given a label (for example, a set of API calls or types) that carries a small amount of information about the desired code.