1 code implementation • 18 Jul 2024 • Charles Jin, Martin Rinard
As language models (LMs) deliver increasing performance on a range of NLP tasks, probing classifiers have become an indispensable technique in the effort to better understand their inner workings.
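For readers unfamiliar with the technique, the sketch below shows a standard probing setup: a small classifier is trained on frozen model activations to predict a property of interest. The placeholder data, the linear probe, and all names are illustrative assumptions, not details from this paper.

```python
# Minimal probing-classifier sketch (illustrative; not this paper's setup).
# A linear probe is fit on frozen activations to predict a target property.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(1000, 64))      # stand-in for frozen LM activations
labels = (hidden_states[:, 0] > 0).astype(int)   # stand-in for the probed property

X_tr, X_te, y_tr, y_te = train_test_split(hidden_states, labels,
                                          test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))
```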
no code implementations • 11 Jul 2023 • Farid Arthaud, Martin Rinard
Existing logics typically assume the ability of agents to reason perfectly about propositions of unbounded modal depth.
no code implementations • 8 Jun 2023 • Kai Jia, Pasapol Saowakon, Limor Appelbaum, Martin Rinard
We take a formal approach to the explainability problem of machine learning systems.
1 code implementation • 18 May 2023 • Charles Jin, Martin Rinard
Each program in the corpus is preceded by a (partial) specification in the form of several input-output grid world states.
no code implementations • 21 Apr 2023 • Kai Jia, Martin Rinard
$L_0$ regularization of neural networks is a fundamental problem.
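For context, the $L_0$-regularized training objective can be written as below (standard notation, not taken from the paper); the penalty counts nonzero parameters, which makes the objective discrete and combinatorially hard to optimize exactly.

$$\min_{\theta}\; \mathcal{L}(\theta) + \lambda\,\|\theta\|_0, \qquad \|\theta\|_0 = \#\{\, i : \theta_i \neq 0 \,\}$$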
no code implementations • 29 Sep 2021 • Charles Jin, Melinda Sun, Martin Rinard
We propose an iterative training procedure for removing poisoned data from the training set.
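The abstract does not spell out the procedure; the generic filter-and-retrain loop below only illustrates the overall shape of such iterative cleaning, with the suspicion score, drop fraction, and round count all assumed for illustration.

```python
# Generic iterative filter-and-retrain sketch (illustrative only; the paper's
# actual criterion for identifying poisoned examples is not reproduced here).
def iterative_filtering(train_fn, suspicion_fn, dataset, rounds=3, drop_frac=0.05):
    """train_fn(data) -> model; suspicion_fn(model, example) -> higher = more suspect."""
    data = list(dataset)
    for _ in range(rounds):
        model = train_fn(data)
        ranked = sorted(data, key=lambda ex: suspicion_fn(model, ex), reverse=True)
        data = ranked[int(drop_frac * len(ranked)):]  # drop the most suspicious examples
    return train_fn(data), data  # final model is trained on the retained data
```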
no code implementations • 29 Sep 2021 • Charles Jin, Martin Rinard
Crucially, our models are simultaneously robust against multiple state-of-the-art adversaries, suggesting that the robustness generalizes well to unseen adversaries.
no code implementations • 18 Aug 2021 • Kai Jia, Martin Rinard
We propose to prepend an input quantization layer to the network.
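A minimal PyTorch sketch of prepending an input quantization layer; the number of levels, the input range, and the rounding scheme are assumptions for illustration rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class InputQuantization(nn.Module):
    """Round inputs in [0, 1] onto a fixed grid of evenly spaced levels."""
    def __init__(self, levels: int = 16):
        super().__init__()
        self.levels = levels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.round(x * (self.levels - 1)) / (self.levels - 1)

# Prepend the quantization layer to an existing classifier.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model = nn.Sequential(InputQuantization(levels=16), backbone)
```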
1 code implementation • 8 May 2021 • Charles Jin, Melinda Sun, Martin Rinard
We propose a novel clustering mechanism based on an incompatibility property between subsets of data that emerges during model training.
no code implementations • 27 Apr 2021 • Shivam Handa, Martin Rinard
Both Rose and the previous system synthesize programs that are optimal over the provided noisy data sets.
no code implementations • 8 Mar 2021 • Shivam Handa, Martin Rinard
We also formalize the concept of, and conditions required for, convergence, i.e., conditions under which the probability that the synthesis algorithm produces a correct program increases as the size of the noisy data set increases.
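One informal way to state the convergence property (notation assumed here, not the paper's): as the size of the noisy data set grows, the probability that the synthesizer returns a correct program tends to one,

$$\lim_{n \to \infty} \Pr\big[\mathrm{Synth}(D_n) \models \varphi\big] = 1,$$

where $D_n$ is a noisy data set of size $n$ and $\varphi$ is the correctness condition.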
1 code implementation • NeurIPS 2021 • Yichen David Yang, Jeevana Priya Inala, Osbert Bastani, Yewen Pu, Armando Solar-Lezama, Martin Rinard
Our results demonstrate that our approach can obtain the benefits of program-guided reinforcement learning without requiring the user to provide a new guiding program for every new task.
1 code implementation • NeurIPS 2020 • Jeevana Priya Inala, Yichen Yang, James Paulos, Yewen Pu, Osbert Bastani, Vijay Kumar, Martin Rinard, Armando Solar-Lezama
We study the problem of inferring communication structures that can solve cooperative multi-agent planning problems while minimizing the amount of communication.
1 code implementation • NeurIPS 2021 • Charles Jin, Martin Rinard
We propose a novel setting for learning, where the input domain is the image of a map defined on the product of two sets, one of which completely determines the labels.
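In symbols (notation introduced here for illustration): inputs arise as the image of a map on a product of two sets, with the label depending only on the first factor,

$$x = \phi(z_1, z_2), \qquad (z_1, z_2) \in Z_1 \times Z_2, \qquad y = g(z_1),$$

so the second factor $Z_2$ varies the input without changing its label.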
1 code implementation • NeurIPS 2020 • Kai Jia, Martin Rinard
We present EEV, a new system for efficient and exact verification of binarized neural networks (BNNs).
1 code implementation • 9 Mar 2020 • Charles Jin, Martin Rinard
We apply concepts from manifold regularization to develop new regularization techniques for training locally stable deep neural networks.
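As a rough illustration of a local-stability penalty in the spirit of manifold regularization (the exact regularizer used in the paper may differ), one can penalize the change in the network's output between an input and nearby perturbed copies of it:

```python
import torch

def local_stability_penalty(model, x, eps=0.01, n_samples=4):
    """Penalize output variation between x and randomly perturbed neighbors of x.
    Illustrative sketch; eps and the perturbation distribution are assumptions."""
    y = model(x)
    penalty = torch.zeros((), device=x.device)
    for _ in range(n_samples):
        x_near = x + eps * torch.randn_like(x)
        penalty = penalty + ((model(x_near) - y) ** 2).sum(dim=-1).mean()
    return penalty / n_samples

# total_loss = task_loss + lam * local_stability_penalty(model, x)
```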
1 code implementation • 6 Mar 2020 • Kai Jia, Martin Rinard
For a pretrained neural network, we present a method that efficiently searches inputs as witnesses for the incorrectness of robustness claims made by a complete verifier.
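Conceptually, such a search resembles an adversarial search inside the region a complete verifier has certified: any misclassified point found within the claimed-robust radius is a witness that the claim (and hence the verifier) is incorrect. The projected-gradient sketch below is an illustration under that framing, not the paper's algorithm.

```python
import torch
import torch.nn.functional as F

def find_witness(model, x, label, eps, steps=100, lr=0.01):
    """Search the L-infinity ball of radius eps around x for a misclassified point.
    Returns a witness input if one is found, else None. Illustrative sketch only."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = -F.cross_entropy(model(x + delta), label)  # maximize the loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                       # stay inside the eps-ball
            if (model(x + delta).argmax(dim=-1) != label).any():
                return (x + delta).detach()
    return None
```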
1 code implementation • 3 Jun 2019 • Yichen Yang, Martin Rinard
The presented framework also detects illegal inputs: inputs that are not contained in (or close to) the target input space defined by the state space and observation process, and on which the neural network is therefore not designed to work, so that cases where no guarantees hold can be flagged.
1 code implementation • 6 Apr 2018 • Phillip Stanley-Marbell, Martin Rinard
We present Warp, a hardware platform to support research in approximate computing, sensor energy optimization, and energy-scavenged systems.
Applied Physics • Hardware Architecture • Emerging Technologies • Robotics • Instrumentation and Detectors
no code implementations • 22 Mar 2018 • Jose Cambronero, Phillip Stanley-Marbell, Martin Rinard
We introduce DaltonQuant, a new color quantization technique for image compression that cloud services can apply to images destined for a specific user with known color vision deficiencies.
no code implementations • 20 Mar 2018 • Justin Gottschlich, Armando Solar-Lezama, Nesime Tatbul, Michael Carbin, Martin Rinard, Regina Barzilay, Saman Amarasinghe, Joshua B. Tenenbaum, Tim Mattson
In this position paper, we describe our vision of the future of machine programming through a categorical examination of three pillars of research.
1 code implementation • 20 Jun 2016 • Fereshte Khani, Martin Rinard, Percy Liang
Specifically, we introduce the unanimity principle: only predict when all models consistent with the training data predict the same output.
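A rough sketch of the unanimity principle, with an ensemble of independently trained models standing in for the full set of models consistent with the training data (which this approximation does not exhaustively enumerate): predict only when every model agrees, otherwise abstain.

```python
def unanimous_predict(models, x):
    """Return the common prediction if all models agree on x; otherwise abstain (None).
    `models` is assumed to be a list of fitted sklearn-style estimators."""
    preds = [m.predict([x])[0] for m in models]
    return preds[0] if all(p == preds[0] for p in preds) else None
```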