1 code implementation • ICML 2020 • Jiani Huang, Calvin Smith, Osbert Bastani, Rishabh Singh, Aws Albarghouthi, Mayur Naik
The policy neural network employs a program interpreter that provides immediate feedback on the consequences of the decisions made by the policy, and also takes into account the uncertainty in the symbolic representation of the image.
no code implementations • 13 Aug 2023 • Aaditya Naik, Adam Stein, Yinjun Wu, Mayur Naik, Eric Wong
Finding errors in machine learning applications requires a thorough exploration of their behavior over data.
no code implementations • 25 May 2023 • Adam Stein, Yinjun Wu, Eric Wong, Mayur Naik
It is well-known that real-world changes constituting distribution shift adversely affect model performance.
1 code implementation • 5 May 2023 • Hanlin Zhang, Jiani Huang, Ziyang Li, Mayur Naik, Eric Xing
We propose DSR-LM, a Differentiable Symbolic Reasoning framework where pre-trained LMs govern the perception of factual knowledge, and a symbolic module performs deductive reasoning.
no code implementations • 15 Apr 2023 • Jiani Huang, Ziyang Li, Mayur Naik, Ser-Nam Lim
We propose LASER, a neuro-symbolic approach to learn semantic video representations that capture rich spatial and temporal properties in video data by leveraging high-level logic specifications.
no code implementations • 10 Apr 2023 • Ziyang Li, Jiani Huang, Mayur Naik
We present Scallop, a language which combines the benefits of deep learning and logical reasoning.
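A minimal sketch of the idea behind such neuro-symbolic combination (illustrative only, not Scallop's actual API): a neural classifier outputs probability distributions over symbols, and a logical rule aggregates them under a differentiable semantics, here the product t-norm for conjunction and summation for disjunction.

```python
# Illustrative sketch (not Scallop's API): a rule sum(a, b, c) holds
# when digits a and b add up to c; the neural network supplies
# probability distributions over the two digits.

def digit_sum_distribution(p_a, p_b):
    """Return the distribution over a + b, given distributions over
    digits a and b (lists indexed 0-9). Conjunction of probabilistic
    facts is a product; disjunction over derivations is a sum."""
    out = [0.0] * 19
    for a, pa in enumerate(p_a):
        for b, pb in enumerate(p_b):
            out[a + b] += pa * pb
    return out

# Example: first image is probably a "3", second is surely a "4".
p_a = [0.0] * 10; p_a[3] = 0.9; p_a[2] = 0.1
p_b = [0.0] * 10; p_b[4] = 1.0
dist = digit_sum_distribution(p_a, p_b)
# dist[7] = 0.9 (via 3 + 4), dist[6] = 0.1 (via 2 + 4)
```

Because the computation is a polynomial in the input probabilities, gradients can flow from a loss on the derived fact back into the perception network.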
1 code implementation • 2 Mar 2023 • Aaditya Naik, Yinjun Wu, Mayur Naik, Eric Wong
Test-time adaptation reduces these violations by up to 68.7%, with relative performance improvements of up to 32%.
1 code implementation • 9 Feb 2023 • Yinjun Wu, Adam Stein, Jacob Gardner, Mayur Naik
In this paper, we study how to learn to identify such a meta sample set from a large, imperfect training set; the identified set is subsequently cleaned and used to optimize performance in the meta re-weighting setting.
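A toy sketch of the meta re-weighting setting this builds on (illustrative, not the paper's algorithm): a small clean meta set assigns per-example weights to a large, partially corrupted training set, so that mislabeled examples are suppressed when the model is refit.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])

# Large training set with a fraction of corrupted labels.
X_train = rng.normal(size=(100, 2))
y_train = X_train @ w_true
y_train[:10] += 25.0                  # first 10 labels are corrupted

# Small, clean meta set.
X_meta = rng.normal(size=(20, 2))
y_meta = X_meta @ w_true

# Fit on the meta set, then down-weight training examples the meta
# model explains poorly; corrupted labels receive near-zero weight.
w_meta, *_ = np.linalg.lstsq(X_meta, y_meta, rcond=None)
resid = y_train - X_train @ w_meta
weights = np.exp(-resid ** 2)

# Weighted least squares on the re-weighted training set.
sw = np.sqrt(weights)
w_hat, *_ = np.linalg.lstsq(X_train * sw[:, None], y_train * sw, rcond=None)
# w_hat recovers w_true despite the corrupted labels.
```

The paper's question is harder than this sketch assumes: the meta set itself must be identified from the imperfect data rather than given clean up front.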
no code implementations • NeurIPS 2021 • Jiani Huang, Ziyang Li, Binghong Chen, Karan Samel, Mayur Naik, Le Song, Xujie Si
Deep learning and symbolic reasoning are complementary techniques for an intelligent system.
no code implementations • NeurIPS Workshop DBAI 2021 • Jiani Huang, Ziyang Li, Ilias Fountalis, Mayur Naik
Numerical reasoning over text requires deep integration between the semantic understanding of the natural language context and the mathematical calculation of the symbolic terms.
1 code implementation • ICLR 2022 • Pardis Pashakhanloo, Aaditya Naik, Yuepeng Wang, Hanjun Dai, Petros Maniatis, Mayur Naik
Designing a suitable representation for code-reasoning tasks is challenging in aspects such as the kinds of program information to model, how to combine them, and how much context to consider.
1 code implementation • ICLR 2020 • Elizabeth Dinella, Hanjun Dai, Ziyang Li, Mayur Naik, Le Song, Ke Wang
We present a learning-based approach to detect and fix a broad range of bugs in JavaScript programs.
no code implementations • 1 Jun 2019 • Xujie Si, Mukund Raghothaman, Kihong Heo, Mayur Naik
The problem of learning logical rules from examples arises in diverse fields, including program synthesis, logic programming, and machine learning.
no code implementations • ICLR 2019 • Xujie Si, Yuan Yang, Hanjun Dai, Mayur Naik, Le Song
Our framework consists of three components: 1) an encoder, which jointly embeds the logical specification and the grammar using a graph neural network; 2) a grammar-adaptive policy network, which enables learning a transferable policy; and 3) a reinforcement learning algorithm that jointly trains the specification and grammar embeddings together with the adaptive policy.
no code implementations • ICLR Workshop drlStructPred 2019 • Halley Young, Osbert Bastani, Mayur Naik
Significant strides have been made toward designing better generative models in recent years.
1 code implementation • NeurIPS 2018 • Xujie Si, Hanjun Dai, Mukund Raghothaman, Mayur Naik, Le Song
A fundamental problem in program verification concerns inferring loop invariants.
no code implementations • NeurIPS 2010 • Ling Huang, Jinzhu Jia, Bin Yu, Byung-Gon Chun, Petros Maniatis, Mayur Naik
Our two SPORE algorithms build relationships between responses (e.g., the execution time of a computer program) and features, selecting a few of the hundreds of retrieved features to construct an explicitly sparse, non-linear model that predicts the response variable.
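The core SPORE idea (SParse POlynomial REgression) can be sketched as follows; this is an illustrative simplification, not the paper's exact algorithm: expand raw program features into polynomial terms, fit a model over the expanded basis, and keep only the few terms with non-negligible coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))              # three raw program features
y = 0.5 * X[:, 0] + 3.0 * X[:, 1] ** 2     # sparse non-linear ground truth

# Degree-2 polynomial expansion: x_i and x_i * x_j terms.
names, cols = [], []
for i in range(3):
    names.append(f"x{i}")
    cols.append(X[:, i])
for i in range(3):
    for j in range(i, 3):
        names.append(f"x{i}*x{j}")
        cols.append(X[:, i] * X[:, j])
P = np.column_stack(cols)

# Fit over the expanded basis, then threshold small coefficients to
# obtain an explicitly sparse model (the paper uses sparsity-inducing
# regularization; thresholded least squares stands in for it here).
coef, *_ = np.linalg.lstsq(P, y, rcond=None)
selected = [n for n, c in zip(names, coef) if abs(c) > 0.1]
# selected == ["x0", "x1*x1"]: the two terms in the ground truth.
```

The selected terms form a small, interpretable non-linear model of the response, which is the point of the "explicitly sparse" formulation.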