no code implementations • 20 Dec 2022 • Belinda Z. Li, Maxwell Nye, Jacob Andreas
Language models (LMs) often generate incoherent outputs: they refer to events and entity states that are incompatible with the state of the world described in their inputs.
no code implementations • 30 Nov 2021 • Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, Augustus Odena
Large pre-trained language models perform remarkably well on tasks that can be done "in one pass", such as generating realistic text or synthesizing computer programs.
1 code implementation • 16 Aug 2021 • Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, Charles Sutton
Our largest models, even without finetuning on a code dataset, can synthesize solutions to 59.6 percent of the problems from MBPP using few-shot learning with a well-designed prompt.
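A minimal sketch of the few-shot prompting setup described above: solved (description, program) pairs are concatenated ahead of the new task. The example tasks and prompt layout here are illustrative assumptions, not the paper's actual prompt.

```python
# Illustrative few-shot prompt construction for MBPP-style program synthesis.
# The example problems and formatting are assumptions, not the paper's prompt.

FEW_SHOT_EXAMPLES = [
    ("Write a function to add two numbers.",
     "def add(a, b):\n    return a + b"),
    ("Write a function to reverse a string.",
     "def reverse(s):\n    return s[::-1]"),
]

def build_prompt(task_description, examples=FEW_SHOT_EXAMPLES):
    """Assemble a few-shot prompt: solved examples, then the new task."""
    parts = []
    for desc, solution in examples:
        parts.append(f"# Task: {desc}\n{solution}\n")
    parts.append(f"# Task: {task_description}\n")
    return "\n".join(parts)

prompt = build_prompt("Write a function to compute the factorial of n.")
print(prompt)
```

The prompt ends with the unsolved task description, so the model's continuation is the candidate program.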
no code implementations • NeurIPS 2021 • Maxwell Nye, Michael Henry Tessler, Joshua B. Tenenbaum, Brenden M. Lake
Human reasoning can often be understood as an interplay between two systems: the intuitive and associative ("System 1") and the deliberative and logical ("System 2").
2 code implementations • 15 Jun 2021 • Samuel Acquaviva, Yewen Pu, Marta Kryven, Theodoros Sechopoulos, Catherine Wong, Gabrielle E Ecanow, Maxwell Nye, Michael Henry Tessler, Joshua B. Tenenbaum
We present LARC, the Language-complete ARC: a collection of natural language descriptions by a group of human participants who instruct each other on how to solve ARC tasks using language alone, which contains successful instructions for 88% of the ARC tasks.
1 code implementation • ACL 2021 • Belinda Z. Li, Maxwell Nye, Jacob Andreas
Does the effectiveness of neural language models derive entirely from accurate modeling of surface word co-occurrence statistics, or do these models represent and reason about the world they describe?
no code implementations • ICLR 2021 • Maxwell Nye, Yewen Pu, Matthew Bowers, Jacob Andreas, Joshua B. Tenenbaum, Armando Solar-Lezama
In this search process, a key challenge is representing the behavior of a partially written program before it can be executed, to judge if it is on the right track and predict where to search next.
3 code implementations • 15 Jun 2020 • Kevin Ellis, Catherine Wong, Maxwell Nye, Mathias Sable-Meyer, Luc Cary, Lucas Morales, Luke Hewitt, Armando Solar-Lezama, Joshua B. Tenenbaum
It builds expertise by creating programming languages for expressing domain concepts, together with neural networks to guide the search for programs within these languages.
no code implementations • NeurIPS 2019 • Kevin Ellis, Maxwell Nye, Yewen Pu, Felix Sosa, Josh Tenenbaum, Armando Solar-Lezama
We present a neural program synthesis approach integrating components which write, execute, and assess code to navigate the search space of possible programs.
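The write-execute-assess loop above can be sketched with a toy arithmetic DSL: enumerate candidate programs, execute each on the specification's input-output examples, and assess which candidates satisfy them. This is a crude enumerative stand-in; the actual system guides the search with learned policy and value networks.

```python
# Sketch of a write-execute-assess synthesis loop over a toy expression DSL.
# Exhaustive enumeration stands in for the paper's learned search guidance.

import itertools

def execute(program, x):
    """Execute a candidate program (a Python expression in x)."""
    try:
        return eval(program, {"x": x})
    except Exception:
        return None

def assess(program, examples):
    """Score a program by how many input-output examples it satisfies."""
    return sum(execute(program, x) == y for x, y in examples)

def synthesize(examples, atoms=("x", "1", "2"), ops=("+", "*"), depth=2):
    """Write candidates by enumeration, then execute and assess each."""
    candidates = list(atoms)
    for _ in range(depth):
        candidates += [f"({a} {op} {b})"
                       for a, b, op in itertools.product(candidates, atoms, ops)]
    best = max(candidates, key=lambda p: assess(p, examples))
    return best if assess(best, examples) == len(examples) else None

# Specification by examples: f(x) = 2x + 1.
examples = [(0, 1), (1, 3), (2, 5)]
print(synthesize(examples))
```

Executing partial candidates and scoring them against the examples is what lets the search discard bad branches early.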
1 code implementation • 17 Feb 2019 • Maxwell Nye, Luke Hewitt, Joshua Tenenbaum, Armando Solar-Lezama
Our goal is to build systems which write code automatically from the kinds of specifications humans can most easily provide, such as examples and natural language instruction.
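One way to combine the two kinds of specification mentioned above is to let a template ("sketch") with a hole stand in for the high-level program structure and fill the hole by search against input-output examples. The sketch and hole grammar below are hypothetical, chosen only to make the idea concrete.

```python
# Sketch-based synthesis from examples: a program template with one hole,
# filled by trying completions from a small grammar. Template and grammar
# are illustrative assumptions.

SKETCH = "def f(lst):\n    return [x {hole} for x in lst]"
HOLES = ["+ 1", "- 1", "* 2", "* x"]

def fill_and_test(examples):
    """Try each hole completion; return the first program consistent
    with all input-output examples, or None."""
    for hole in HOLES:
        src = SKETCH.format(hole=hole)
        namespace = {}
        exec(src, namespace)
        f = namespace["f"]
        if all(f(inp) == out for inp, out in examples):
            return src
    return None

# Specification by examples: double each element of the list.
print(fill_and_test([([1, 2], [2, 4]), ([3], [6])]))
```

In the full approach a model proposes the sketch from the natural language, so the expensive search is confined to the hole.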
no code implementations • 17 Jul 2018 • Maxwell Nye, Andrew Saxe
Specifically, we train deep neural networks to learn two simple functions with known efficient solutions: the parity function and the fast Fourier transform.
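For concreteness, here is the parity target function named above (the XOR of all input bits); the training setup and the FFT target are omitted, so this is just the learning target, not the paper's experiment.

```python
# The parity function: 1 if an odd number of input bits are 1, else 0.
# This is only the learning target; the paper trains networks to fit it.

import itertools

def parity(bits):
    """XOR of all bits in the input."""
    result = 0
    for b in bits:
        result ^= b
    return result

# Full truth table for 3-bit parity.
for bits in itertools.product([0, 1], repeat=3):
    print(bits, parity(bits))
```

Parity is a standard hard case for gradient-based learning because no proper subset of the bits carries any information about the output.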