Search Results for author: Rishabh Singh

Found 44 papers, 14 papers with code

Generating Programmatic Referring Expressions via Program Synthesis

1 code implementation ICML 2020 Jiani Huang, Calvin Smith, Osbert Bastani, Rishabh Singh, Aws Albarghouthi, Mayur Naik

The policy neural network employs a program interpreter that provides immediate feedback on the consequences of the decisions made by the policy, and also takes into account the uncertainty in the symbolic representation of the image.

Enumerative Search

Quantifying Model Predictive Uncertainty with Perturbation Theory

no code implementations 22 Sep 2021 Rishabh Singh, Jose C. Principe

We propose a framework for predictive uncertainty quantification of a neural network that replaces the conventional Bayesian notion of weight probability density function (PDF) with a physics based potential field representation of the model weights in a Gaussian reproducing kernel Hilbert space (RKHS) embedding.
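As a rough illustration of the RKHS idea (not the authors' actual estimator), a Gaussian kernel can assign each query point a "potential" equal to its average kernel similarity to a set of sampled weights; a low potential far from the sample mass is one crude signal of higher uncertainty. The data and bandwidth below are invented:

```python
import numpy as np

def rkhs_potential(query, samples, sigma=1.0):
    """Average Gaussian-kernel similarity of `query` to a set of weight
    samples -- a toy stand-in for a potential-field value in an RKHS."""
    diffs = samples - query                 # (n, d)
    sq = np.sum(diffs ** 2, axis=1)         # squared distances to each sample
    return float(np.mean(np.exp(-sq / (2.0 * sigma ** 2))))

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 1.0, size=(500, 2))        # hypothetical weight samples
near = rkhs_potential(np.zeros(2), weights)           # query near the mass
far = rkhs_potential(np.array([5.0, 5.0]), weights)   # query far away
# The potential drops as the query moves away from the sampled weights.
```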

Deep Geospatial Interpolation Networks

no code implementations 15 Aug 2021 Sumit Kumar Varshney, Jeetu Kumar, Aditya Tiwari, Rishabh Singh, Venkata M. V. Gunturi, Narayanan C. Krishnan

Spatio-temporal interpolation is highly challenging due to the complex spatial and temporal relationships.

SpreadsheetCoder: Formula Prediction from Semi-structured Context

1 code implementation 26 Jun 2021 Xinyun Chen, Petros Maniatis, Rishabh Singh, Charles Sutton, Hanjun Dai, Max Lin, Denny Zhou

In this work, we present the first approach for synthesizing spreadsheet formulas from tabular context, which includes both headers and semi-structured tabular data.

Program Synthesis

A Kernel Framework to Quantify a Model's Local Predictive Uncertainty under Data Distributional Shifts

no code implementations 2 Mar 2021 Rishabh Singh, Jose C. Principe

We therefore propose a framework for predictive uncertainty quantification of a trained neural network that explicitly estimates the PDF of its raw prediction space (before activation), p(y'|x, w), which we refer to as the model PDF, in a Gaussian reproducing kernel Hilbert space (RKHS).

Latent Programmer: Discrete Latent Codes for Program Synthesis

no code implementations 1 Dec 2020 Joey Hong, David Dohan, Rishabh Singh, Charles Sutton, Manzil Zaheer

The latent codes are learned using a self-supervised learning principle, in which first a discrete autoencoder is trained on the output sequences, and then the resulting latent codes are used as intermediate targets for the end-to-end sequence prediction task.

Document Summarization · Program Synthesis +1
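The discrete-code idea can be sketched with plain nearest-neighbor vector quantization, a simplification of the paper's discrete autoencoder; the codebook and data below are made up:

```python
import numpy as np

def quantize(embeddings, codebook):
    """Map each continuous embedding to the index of its nearest codebook
    vector -- the discrete 'latent code' used as an intermediate target."""
    # (n, 1, d) - (1, k, d) -> (n, k) matrix of squared distances
    d2 = np.sum((embeddings[:, None, :] - codebook[None, :, :]) ** 2, axis=-1)
    return np.argmin(d2, axis=1)

rng = np.random.default_rng(1)
codebook = rng.normal(size=(8, 4))                   # k=8 codes, dim 4
outputs = codebook[[2, 5, 2]] + 0.01 * rng.normal(size=(3, 4))  # noisy copies
codes = quantize(outputs, codebook)
# codes recovers the indices of the nearest codebook entries: [2, 5, 2]
```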

Learning Discrete Energy-based Models via Auxiliary-variable Local Exploration

no code implementations NeurIPS 2020 Hanjun Dai, Rishabh Singh, Bo Dai, Charles Sutton, Dale Schuurmans

In this paper we propose ALOE, a new algorithm for learning conditional and unconditional EBMs for discrete structured data, where parameter gradients are estimated using a learned sampler that mimics local search.

Language Modelling
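A minimal sketch of the kind of local-search move such a learned sampler imitates, on a toy energy over binary vectors (the energy function and greedy loop are illustrative, not ALOE itself):

```python
import numpy as np

def energy(x):
    # Toy energy over binary vectors: minimized when every bit equals 1.
    return -int(np.sum(x))

def greedy_local_search(x):
    """Flip one bit at a time, keeping any flip that lowers the energy."""
    x = x.copy()
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            e0 = energy(x)
            x[i] ^= 1                  # propose flipping bit i
            if energy(x) < e0:
                improved = True        # keep the flip
            else:
                x[i] ^= 1              # revert
    return x

x0 = np.array([0, 1, 0, 0, 1], dtype=int)
x_star = greedy_local_search(x0)
# -> all ones, the toy energy minimum
```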

Deep Learning & Software Engineering: State of Research and Future Directions

1 code implementation 17 Sep 2020 Prem Devanbu, Matthew Dwyer, Sebastian Elbaum, Michael Lowry, Kevin Moran, Denys Poshyvanyk, Baishakhi Ray, Rishabh Singh, Xiangyu Zhang

The intent of this report is to serve as a potential roadmap to guide future work that sits at the intersection of SE & DL.

Scaling Symbolic Methods using Gradients for Neural Model Explanation

2 code implementations ICLR 2021 Subham Sekhar Sahoo, Subhashini Venugopalan, Li Li, Rishabh Singh, Patrick Riley

In this work, we propose a technique for combining gradient-based methods with symbolic techniques to scale such analyses and demonstrate its application for model explanation.

Neural Program Synthesis with a Differentiable Fixer

no code implementations 19 Jun 2020 Matej Balog, Rishabh Singh, Petros Maniatis, Charles Sutton

We present a new program synthesis approach that combines an encoder-decoder based synthesis architecture with a differentiable program fixer.

Program Synthesis

Global Relational Models of Source Code

1 code implementation ICLR 2020 Vincent J. Hellendoorn, Charles Sutton, Rishabh Singh, Petros Maniatis, David Bieber

By studying a popular, non-trivial program repair task, variable-misuse identification, we explore the relative merits of traditional and hybrid model families for code representation.

Variable misuse

TF-Coder: Program Synthesis for Tensor Manipulations

2 code implementations 19 Mar 2020 Kensen Shi, David Bieber, Rishabh Singh

The success and popularity of deep learning are on the rise, due in part to powerful deep learning frameworks such as TensorFlow and PyTorch that make it easier to develop deep learning models.

Enumerative Search
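A toy value-based enumerative search in the spirit of TF-Coder, over a handful of NumPy operations rather than TensorFlow; the op set and depth limit are invented for illustration:

```python
import numpy as np

# Small library of candidate operations for the search to compose.
OPS = {
    "transpose": lambda t: t.T,
    "reshape_flat": lambda t: t.reshape(-1),
    "cumsum": lambda t: np.cumsum(t),
    "square": lambda t: t * t,
}

def synthesize(inp, out, depth=2):
    """Enumerate op sequences up to `depth`, returning the first whose
    result matches the target output on the given example."""
    frontier = [([], inp)]
    for _ in range(depth):
        nxt = []
        for prog, val in frontier:
            for name, fn in OPS.items():
                try:
                    v = fn(val)
                except Exception:
                    continue            # skip ops that fail on this value
                if np.array_equal(v, out):
                    return prog + [name]
                nxt.append((prog + [name], v))
        frontier = nxt
    return None

inp = np.array([[1, 2], [3, 4]])
out = np.array([[1, 9], [4, 16]])       # elementwise square, transposed
prog = synthesize(inp, out)
# -> a two-op program; square and transpose commute here, so either order works
```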

Towards Modular Algorithm Induction

no code implementations 27 Feb 2020 Daniel A. Abolafia, Rishabh Singh, Manzil Zaheer, Charles Sutton

The proposed model, Main, consists of a neural controller that interacts with a variable-length input tape and learns to compose modules together with their corresponding argument choices.

Towards a Kernel based Uncertainty Decomposition Framework for Data and Models

no code implementations 30 Jan 2020 Rishabh Singh, Jose C. Principe

This paper introduces a new framework for quantifying predictive uncertainty for both data and models that relies on projecting the data into a Gaussian reproducing kernel Hilbert space (RKHS) and transforming the data probability density function (PDF) in a way that quantifies the flow of its gradient as a topological potential field quantified at all points in the sample space.

Time Series

Synthetic Datasets for Neural Program Synthesis

no code implementations ICLR 2019 Richard Shin, Neel Kant, Kavi Gupta, Christopher Bender, Brandon Trabucco, Rishabh Singh, Dawn Song

The goal of program synthesis is to automatically generate programs in a particular language from corresponding specifications, e.g., input-output behavior.

Program Synthesis

Learning Transferable Graph Exploration

no code implementations NeurIPS 2019 Hanjun Dai, Yujia Li, Chenglong Wang, Rishabh Singh, Po-Sen Huang, Pushmeet Kohli

We propose a 'learning to explore' framework where we learn a policy from a distribution of environments.

Efficient Exploration

MoET: Mixture of Expert Trees and its Application to Verifiable Reinforcement Learning

1 code implementation 16 Jun 2019 Marko Vasic, Andrija Petrovic, Kaiyuan Wang, Mladen Nikolic, Rishabh Singh, Sarfraz Khurshid

By training MoET models using an imitation learning procedure on deep RL agents we outperform the previous state-of-the-art technique based on decision trees while preserving the verifiability of the models.

Game of Go · Imitation Learning +2

Neural Program Repair by Jointly Learning to Localize and Repair

2 code implementations ICLR 2019 Marko Vasic, Aditya Kanade, Petros Maniatis, David Bieber, Rishabh Singh

We show that it is beneficial to train a model that jointly and directly localizes and repairs variable-misuse bugs.

Variable misuse

Neural-Guided Symbolic Regression with Asymptotic Constraints

1 code implementation 23 Jan 2019 Li Li, Minjie Fan, Rishabh Singh, Patrick Riley

The second part, which we call Neural-Guided Monte Carlo Tree Search, uses the network during a search to find an expression that conforms to a set of data points and desired leading powers.
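The asymptotic-constraint part can be illustrated without any neural guidance: the leading power of a candidate expression can be estimated from its slope on a log-log scale at large inputs and compared against the desired power. The sample points below are arbitrary:

```python
import numpy as np

def leading_power(f, x0=1e4, x1=1e6):
    """Estimate the asymptotic leading power of f as x -> infinity from the
    log-log slope between two large sample points -- a cheap check that a
    candidate expression conforms to a desired leading power."""
    return (np.log(abs(f(x1))) - np.log(abs(f(x0)))) / (np.log(x1) - np.log(x0))

f = lambda x: 3 * x**2 + 5 * x          # leading power 2
g = lambda x: 7 / x                     # leading power -1
lp_f, lp_g = leading_power(f), leading_power(g)
# round(lp_f) -> 2, round(lp_g) -> -1
```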

Robust Text-to-SQL Generation with Execution-Guided Decoding

1 code implementation 9 Jul 2018 Chenglong Wang, Kedar Tatwawadi, Marc Brockschmidt, Po-Sen Huang, Yi Mao, Oleksandr Polozov, Rishabh Singh

We consider the problem of neural semantic parsing, which translates natural language questions into executable SQL queries.

Semantic Parsing · Text-To-SQL
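A minimal sketch of the execution-guided idea, assuming a hypothetical in-memory table and decoder candidates: candidates that fail to execute are discarded, and the first executable one is kept.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (name TEXT, pop INTEGER)")
conn.executemany("INSERT INTO city VALUES (?, ?)", [("a", 10), ("b", 20)])

# Hypothetical decoder candidates, in decreasing model score.
candidates = [
    "SELECT population FROM city",      # wrong column name -> execution error
    "SELECT MAX(pop) FROM city",        # executes fine
]

def first_executable(candidates, conn):
    """Keep the highest-scoring candidate that actually executes."""
    for sql in candidates:
        try:
            return sql, conn.execute(sql).fetchall()
        except sqlite3.Error:
            continue
    return None, None

sql, rows = first_executable(candidates, conn)
# -> keeps "SELECT MAX(pop) FROM city"; rows == [(20,)]
```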

Towards Mixed Optimization for Reinforcement Learning with Program Synthesis

no code implementations 1 Jul 2018 Surya Bhupatiraju, Kumar Krishna Agrawal, Rishabh Singh

Deep reinforcement learning has led to several recent breakthroughs, though the learned policies are often based on black-box neural networks.

Program Repair

Programmatically Interpretable Reinforcement Learning

no code implementations ICML 2018 Abhinav Verma, Vijayaraghavan Murali, Rishabh Singh, Pushmeet Kohli, Swarat Chaudhuri

Unlike the popular Deep Reinforcement Learning (DRL) paradigm, which represents policies by neural networks, PIRL represents policies using a high-level, domain-specific programming language.

Car Racing

Learning and analyzing vector encoding of symbolic representations

no code implementations 10 Mar 2018 Roland Fernandez, Asli Celikyilmaz, Rishabh Singh, Paul Smolensky

We present a formal language with expressions denoting general symbol structures and queries which access information in those structures.

Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections

no code implementations NeurIPS 2018 Xin Zhang, Armando Solar-Lezama, Rishabh Singh

We argue that such a correction is a useful way to provide feedback to a user when the network's output is different from a desired output.

Deep Reinforcement Fuzzing

no code implementations 14 Jan 2018 Konstantin Böttinger, Patrice Godefroid, Rishabh Singh

Fuzzing is the process of finding security vulnerabilities in input-processing code by repeatedly testing the code with modified inputs.
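A bare-bones mutation fuzzer over a toy parser with a planted bug illustrates the loop the paper builds on; the parser, seed, and trial count are all invented:

```python
import random

def parse_header(data: bytes):
    """Toy input parser with a planted bug: crashes when byte 3 is 0xFF."""
    if len(data) > 3 and data[3] == 0xFF:
        raise ValueError("crash")
    return len(data)

def fuzz(seed: bytes, trials=50000):
    """Repeatedly mutate one byte of the seed and report a crashing input."""
    rng = random.Random(0)
    for _ in range(trials):
        data = bytearray(seed)
        i = rng.randrange(len(data))
        data[i] = rng.randrange(256)       # overwrite one byte at random
        try:
            parse_header(bytes(data))
        except ValueError:
            return bytes(data)             # found a crashing input
    return None

crash = fuzz(b"\x00\x01\x02\x03\x04")
# crash is an input whose byte 3 is 0xFF
```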


Pointing Out SQL Queries From Text

no code implementations ICLR 2018 Chenglong Wang, Marc Brockschmidt, Rishabh Singh

We present a system that allows for querying data tables using natural language questions, where the system translates the question into an executable SQL query.

Dynamic Neural Program Embeddings for Program Repair

no code implementations ICLR 2018 Ke Wang, Rishabh Singh, Zhendong Su

Our evaluation results show that the semantic program embeddings significantly outperform the syntactic program embeddings based on token sequences and abstract syntax trees.

Code Completion · Fault localization

SyGuS-Comp 2017: Results and Analysis

no code implementations 29 Nov 2017 Rajeev Alur, Dana Fisman, Rishabh Singh, Armando Solar-Lezama

Syntax-Guided Synthesis (SyGuS) is the computational problem of finding an implementation f that meets both a semantic constraint given by a logical formula $\varphi$ in a background theory T, and a syntactic constraint given by a grammar G, which specifies the allowed set of candidate implementations.
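A toy SyGuS-flavored sketch: candidate implementations come from a tiny hand-written grammar, and the semantic constraint is checked by testing on sampled inputs (a stand-in for the SMT-based verification a real solver would use); the grammar and constraint below are invented:

```python
import random

# Candidate implementations drawn from a small grammar G.
GRAMMAR = [
    ("x", lambda x, y: x),
    ("y", lambda x, y: y),
    ("x + y", lambda x, y: x + y),
    ("if x > y then x else y", lambda x, y: x if x > y else y),
]

def spec(f, x, y):
    # Semantic constraint: f must return the maximum of its two inputs.
    return f(x, y) >= x and f(x, y) >= y and f(x, y) in (x, y)

rng = random.Random(0)
tests = [(rng.randint(-50, 50), rng.randint(-50, 50)) for _ in range(100)]
winner = next(name for name, f in GRAMMAR
              if all(spec(f, x, y) for x, y in tests))
# -> "if x > y then x else y"
```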

Dynamic Neural Program Embedding for Program Repair

1 code implementation 20 Nov 2017 Ke Wang, Rishabh Singh, Zhendong Su

Evaluation results show that our new semantic program embedding significantly outperforms the syntactic program embeddings based on token sequences and abstract syntax trees.

Fault localization

Not all bytes are equal: Neural byte sieve for fuzzing

no code implementations 10 Nov 2017 Mohit Rajpal, William Blum, Rishabh Singh

Fuzzing is a popular dynamic program analysis technique used to find vulnerabilities in complex software.

Semantic Code Repair using Neuro-Symbolic Transformation Networks

no code implementations ICLR 2018 Jacob Devlin, Jonathan Uesato, Rishabh Singh, Pushmeet Kohli

We study the problem of semantic code repair, which can be broadly defined as automatically fixing non-syntactic bugs in source code.

Code Repair

Neural Program Meta-Induction

no code implementations NeurIPS 2017 Jacob Devlin, Rudy Bunel, Rishabh Singh, Matthew Hausknecht, Pushmeet Kohli

In our first proposal, portfolio adaptation, a set of induction models is pretrained on a set of related tasks, and the best model is adapted towards the new task using transfer learning.

Program induction · Transfer Learning

Deep API Programmer: Learning to Program with APIs

no code implementations 14 Apr 2017 Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, Pushmeet Kohli

We then present a novel neural synthesis algorithm to search for programs in the DSL that are consistent with a given set of examples.

RobustFill: Neural Program Learning under Noisy I/O

3 code implementations ICML 2017 Jacob Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, Pushmeet Kohli

Recently, two competing approaches for automatic program learning have received significant attention: (1) neural program synthesis, where a neural network is conditioned on input/output (I/O) examples and learns to generate a program, and (2) neural program induction, where a neural network generates new outputs directly using a latent program representation.

Program induction · Program Synthesis
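One way to mimic robustness to noisy I/O symbolically is to accept the program consistent with the most examples rather than all of them; the op set and examples below are made up and far simpler than RobustFill's string-transformation DSL:

```python
# Tiny library of candidate string programs.
OPS = {
    "upper": str.upper,
    "lower": str.lower,
    "first3": lambda s: s[:3],
}

# I/O examples, one of which is noisy (a wrong output).
examples = [("Hello", "HELLO"), ("world", "WORLD"), ("oops", "typo")]

# Pick the program that matches the most examples, tolerating the noise.
best = max(OPS, key=lambda name: sum(OPS[name](i) == o for i, o in examples))
# -> "upper", matching 2 of the 3 examples
```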

Learn&Fuzz: Machine Learning for Input Fuzzing

1 code implementation 25 Jan 2017 Patrice Godefroid, Hila Peleg, Rishabh Singh

Fuzzing consists of repeatedly testing an application with modified, or fuzzed, inputs with the goal of finding security vulnerabilities in input-parsing code.
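A character bigram model is a minimal stand-in for the kind of generative model Learn&Fuzz trains on well-formed inputs: learn transitions from sample inputs, then sample new, mostly-well-formed ones to fuzz with. The corpus here is invented:

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Record, for each character, the characters observed to follow it."""
    nxt = defaultdict(list)
    for s in corpus:
        for a, b in zip(s, s[1:]):
            nxt[a].append(b)
    return nxt

def sample(nxt, start, length, seed=0):
    """Generate a new input by walking the learned transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = nxt.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return "".join(out)

model = train_bigram(["<obj><key>1</key></obj>", "<obj><key>2</key></obj>"])
fuzzed = sample(model, "<", 20)
# fuzzed is a new tag-like string built from learned character transitions
```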

SyGuS-Comp 2016: Results and Analysis

no code implementations 23 Nov 2016 Rajeev Alur, Dana Fisman, Rishabh Singh, Armando Solar-Lezama

Syntax-Guided Synthesis (SyGuS) is the computational problem of finding an implementation f that meets both a semantic constraint given by a logical formula $\varphi$ in a background theory T, and a syntactic constraint given by a grammar G, which specifies the allowed set of candidate implementations.

Neuro-Symbolic Program Synthesis

no code implementations 6 Nov 2016 Emilio Parisotto, Abdel-rahman Mohamed, Rishabh Singh, Lihong Li, Dengyong Zhou, Pushmeet Kohli

While achieving impressive results, these approaches have a number of important limitations: (a) they are computationally expensive and hard to train, (b) a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verify the correctness of the learnt mapping (as it is defined by a neural network).

Program induction · Program Synthesis

TerpreT: A Probabilistic Programming Language for Program Induction

no code implementations 15 Aug 2016 Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, Daniel Tarlow

TerpreT is similar to a probabilistic programming language: a model is composed of a specification of a program representation (declarations of random variables) and an interpreter describing how programs map inputs to outputs (a model connecting unknowns to observations).

Probabilistic Programming · Program induction +1

Automated Correction for Syntax Errors in Programming Assignments using Recurrent Neural Networks

no code implementations 19 Mar 2016 Sahil Bhatia, Rishabh Singh

We present a technique for providing feedback on syntax errors that uses Recurrent neural networks (RNNs) to model syntactically valid token sequences.
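A toy stand-in for the RNN: record which token bigrams occur in valid code, then flag the first unseen bigram in a new token sequence as the likely error location. The token vocabulary and programs are invented:

```python
# Token sequences from syntactically valid programs.
valid_programs = [
    ["if", "(", "x", ")", "{", "}"],
    ["while", "(", "x", ")", "{", "}"],
]
seen = {(a, b) for prog in valid_programs for a, b in zip(prog, prog[1:])}

def first_suspect(tokens):
    """Return the index of the first token forming an unseen bigram."""
    for i, pair in enumerate(zip(tokens, tokens[1:])):
        if pair not in seen:
            return i + 1          # index of the unexpected token
    return None

buggy = ["if", "(", "x", "{", "}"]    # missing ")"
suspect = first_suspect(buggy)        # -> 3, pointing at "{"
```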
