Search Results for author: Maxwell Nye

Found 11 papers, 5 papers with code

Language Modeling with Latent Situations

no code implementations 20 Dec 2022 Belinda Z. Li, Maxwell Nye, Jacob Andreas

Language models (LMs) often generate incoherent outputs: they refer to events and entity states that are incompatible with the state of the world described in their inputs.

Language Modelling

Show Your Work: Scratchpads for Intermediate Computation with Language Models

no code implementations 30 Nov 2021 Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, Augustus Odena

Large pre-trained language models perform remarkably well on tasks that can be done "in one pass", such as generating realistic text or synthesizing computer programs.
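
A minimal sketch of the scratchpad idea this paper introduces: instead of answering in one pass, the model is prompted to write out intermediate computation before committing to a final answer. The prompt format and the `generate` callable below are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of scratchpad-style prompting (illustrative, not the paper's format).
# `generate` stands in for any text-completion call and is assumed here.

def make_scratchpad_prompt(question):
    """Few-shot prompt asking the model to show intermediate steps before the answer."""
    example = (
        "Input: 29 + 57\n"
        "Scratchpad:\n"
        "9 + 7 = 16, write 6 carry 1\n"
        "2 + 5 + 1 = 8, write 8\n"
        "Target: 86\n\n"
    )
    return example + "Input: " + question + "\nScratchpad:\n"

def answer_with_scratchpad(question, generate):
    """Sample a completion and take the text after 'Target:' as the final answer."""
    completion = generate(make_scratchpad_prompt(question))
    return completion.split("Target:")[-1].strip()
```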

Program Synthesis with Large Language Models

1 code implementation 16 Aug 2021 Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, Charles Sutton

Our largest models, even without finetuning on a code dataset, can synthesize solutions to 59.6 percent of the problems from MBPP using few-shot learning with a well-designed prompt.

Few-Shot Learning Program Synthesis
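
A hedged sketch of the few-shot setup the abstract describes: a prompt with a worked example, a model-written candidate solution, and execution-based checking against the task's assertions. The prompt text and the `generate` callable are illustrative assumptions, not the paper's exact prompt or evaluation harness.

```python
# Illustrative MBPP-style few-shot prompting with execution-based checking.

FEW_SHOT = '''You are an expert Python programmer.

# Write a function to add two numbers.
# assert add(2, 3) == 5
def add(a, b):
    return a + b
'''

def solve(description, tests, generate):
    prompt = FEW_SHOT + "\n# " + description + "\n" + "".join("# " + t + "\n" for t in tests)
    candidate = generate(prompt)           # the model writes a candidate function
    try:
        namespace = {}
        exec(candidate, namespace)         # define the candidate function...
        for t in tests:
            exec(t, namespace)             # ...then run the held-out assertions
        return candidate                   # all assertions passed
    except Exception:
        return None                        # syntax error or failing test
```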

Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning

no code implementations NeurIPS 2021 Maxwell Nye, Michael Henry Tessler, Joshua B. Tenenbaum, Brenden M. Lake

Human reasoning can often be understood as an interplay between two systems: the intuitive and associative ("System 1") and the deliberative and logical ("System 2").

Instruction Following Logical Reasoning +1
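
A rough sketch of the dual-system idea behind this paper: a neural proposer ("System 1") samples candidate continuations and a symbolic checker ("System 2") filters out those that contradict the tracked world state. Both `propose` and `consistent_with` are hypothetical stand-ins, not the paper's components.

```python
# Rough sketch of dual-system generation: sample from a fast neural proposer,
# keep only candidates the deliberate symbolic checker accepts.

def generate_coherent(context, world_state, propose, consistent_with, n_candidates=5):
    """Return the first sampled continuation the symbolic checker accepts."""
    candidate = None
    for _ in range(n_candidates):
        candidate = propose(context)                  # System 1: fast, associative
        if consistent_with(world_state, candidate):   # System 2: deliberate check
            return candidate
    return candidate   # fall back to the last sample if none passes
```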

Communicating Natural Programs to Humans and Machines

2 code implementations 15 Jun 2021 Samuel Acquaviva, Yewen Pu, Marta Kryven, Theodoros Sechopoulos, Catherine Wong, Gabrielle E Ecanow, Maxwell Nye, Michael Henry Tessler, Joshua B. Tenenbaum

We present LARC, the Language-complete ARC: a collection of natural language descriptions by a group of human participants who instruct each other on how to solve ARC tasks using language alone, which contains successful instructions for 88% of the ARC tasks.

Program Synthesis

Implicit Representations of Meaning in Neural Language Models

1 code implementation ACL 2021 Belinda Z. Li, Maxwell Nye, Jacob Andreas

Does the effectiveness of neural language models derive entirely from accurate modeling of surface word co-occurrence statistics, or do these models represent and reason about the world they describe?

Text Generation

Representing Partial Programs with Blended Abstract Semantics

no code implementations ICLR 2021 Maxwell Nye, Yewen Pu, Matthew Bowers, Jacob Andreas, Joshua B. Tenenbaum, Armando Solar-Lezama

In this search process, a key challenge is representing the behavior of a partially written program before it can be executed, to judge if it is on the right track and predict where to search next.

Program Synthesis

DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning

3 code implementations 15 Jun 2020 Kevin Ellis, Catherine Wong, Maxwell Nye, Mathias Sable-Meyer, Luc Cary, Lucas Morales, Luke Hewitt, Armando Solar-Lezama, Joshua B. Tenenbaum

It builds expertise by creating programming languages for expressing domain concepts, together with neural networks to guide the search for programs within these languages.

Drawing Pictures Program induction +1
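
A toy illustration of the library-learning loop DreamCoder builds on: enumerate compositions of primitives to solve tasks, then compress a recurring fragment into a new named primitive so later searches are shorter. The DSL and tasks below are invented for illustration; DreamCoder additionally trains a neural recognition model to guide the search.

```python
# Toy library-learning sketch (not DreamCoder itself).
from itertools import product

PRIMITIVES = {"inc": lambda x: x + 1, "double": lambda x: x * 2}
tasks = [(3, 8), (5, 12), (2, 6)]   # (input, output) pairs, here f(x) = 2x + 2

def search(task, library, max_len=3):
    """Enumerate primitive sequences up to max_len and return one that fits the task."""
    x, y = task
    for length in range(1, max_len + 1):
        for names in product(library, repeat=length):
            value = x
            for name in names:
                value = library[name](value)
            if value == y:
                return names
    return None

solutions = [search(t, PRIMITIVES) for t in tasks]
print(solutions)   # ('inc', 'double') for every task

# "Sleep"/abstraction phase (greatly simplified): promote the shared fragment
# to a new named primitive, compressing future searches.
PRIMITIVES["inc_then_double"] = lambda x: (x + 1) * 2
print(search((10, 22), PRIMITIVES, max_len=1))   # ('inc_then_double',)
```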

Write, Execute, Assess: Program Synthesis with a REPL

no code implementations NeurIPS 2019 Kevin Ellis, Maxwell Nye, Yewen Pu, Felix Sosa, Josh Tenenbaum, Armando Solar-Lezama

We present a neural program synthesis approach integrating components which write, execute, and assess code to navigate the search space of possible programs.

Navigate Program Synthesis
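
A hedged sketch of the write-execute-assess loop in a toy string-editing domain: each step proposes a primitive edit, executes it to obtain a concrete intermediate state, and assesses that state against the target to decide what to write next. The domain, scoring function, and greedy search are illustrative assumptions, not the paper's setup.

```python
# Toy write-execute-assess loop over string edits.

EDITS = {
    "upper": str.upper,
    "strip": str.strip,
    "drop_last": lambda s: s[:-1],
}

def assess(state, target):
    """Crude score: matching positions, penalized by the length difference."""
    return sum(a == b for a, b in zip(state, target)) - abs(len(state) - len(target))

def synthesize(source, target, max_steps=5):
    program, state = [], source
    for _ in range(max_steps):
        if state == target:
            return program
        # Write: propose every primitive; Execute: run it; Assess: score the result.
        name, new_state = max(
            ((n, f(state)) for n, f in EDITS.items()),
            key=lambda pair: assess(pair[1], target),
        )
        program.append(name)
        state = new_state
    return program if state == target else None

print(synthesize("  hello! ", "HELLO!"))   # ['strip', 'upper']
```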

Learning to Infer Program Sketches

1 code implementation 17 Feb 2019 Maxwell Nye, Luke Hewitt, Joshua Tenenbaum, Armando Solar-Lezama

Our goal is to build systems which write code automatically from the kinds of specifications humans can most easily provide, such as examples and natural language instruction.

Memorization Program Synthesis
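
A minimal sketch of the sketch-then-fill idea suggested by the title: a program sketch with a hole, plus a small enumerative search that fills the hole until the input-output examples pass. Here the sketch and candidate pool are hard-coded for illustration; in the paper the sketch is produced by a learned model.

```python
# Toy sketch-then-fill synthesis from input-output examples.

examples = [(1, 4), (2, 5), (10, 13)]       # f(x) = x + 3
sketch = "lambda x: x + HOLE"               # stands in for a neural sketch proposal
candidates = [str(k) for k in range(10)]    # tiny hole-filling search space

def fill(sketch, examples):
    for expr in candidates:
        program = sketch.replace("HOLE", expr)
        f = eval(program)                   # fine for a toy; unsafe in general
        if all(f(x) == y for x, y in examples):
            return program
    return None

print(fill(sketch, examples))               # lambda x: x + 3
```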

Are Efficient Deep Representations Learnable?

no code implementations 17 Jul 2018 Maxwell Nye, Andrew Saxe

Specifically, we train deep neural networks to learn two simple functions with known efficient solutions: the parity function and the fast Fourier transform.
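
A quick sketch in the spirit of that experiment: label random bit strings by their parity, fit a small feed-forward network, and measure accuracy on held-out strings. The architecture and hyperparameters below are illustrative, not the paper's.

```python
# Illustrative parity-learning experiment with a small MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_bits, n_samples = 16, 5000
X = rng.integers(0, 2, size=(n_samples, n_bits))
y = X.sum(axis=1) % 2                        # parity label of each bit string

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("held-out accuracy:", net.score(X_test, y_test))
```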
