Search Results for author: Gabriel Grand

Found 13 papers, 10 papers with code

Stream of Search (SoS): Learning to Search in Language

1 code implementation · 1 Apr 2024 · Kanishk Gandhi, Denise Lee, Gabriel Grand, Muxin Liu, Winson Cheng, Archit Sharma, Noah D. Goodman

In this paper, we show how language models can be taught to search by representing the process of search in language, as a flattened string -- a stream of search (SoS).

Language Modelling
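As a rough sketch of the idea (not the paper's actual serialization format), a search trajectory can be linearized into one string that a language model can be trained to imitate; the marker names and state format below are illustrative assumptions.

    # Sketch: serialize a depth-first search trace into a flat
    # "stream of search" string (marker names are assumptions).
    def search_to_stream(start, goal, neighbors):
        trace = [f"START {start} GOAL {goal}"]
        seen = set()

        def dfs(state):
            trace.append(f"EXPLORE {state}")
            if state == goal:
                trace.append("FOUND")
                return True
            seen.add(state)
            for nxt in neighbors(state):
                if nxt not in seen and dfs(nxt):
                    return True
            trace.append(f"BACKTRACK {state}")
            return False

        dfs(start)
        return " ; ".join(trace)  # flattened string, ready for LM training

    graph = {1: [2, 3], 2: [4], 3: [4], 4: []}
    print(search_to_stream(1, 4, lambda s: graph.get(s, [])))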

LILO: Learning Interpretable Libraries by Compressing and Documenting Code

1 code implementation · 30 Oct 2023 · Gabriel Grand, Lionel Wong, Maddy Bowers, Theo X. Olausson, Muxin Liu, Joshua B. Tenenbaum, Jacob Andreas

While large language models (LLMs) now excel at code generation, a key aspect of software development is the art of refactoring: consolidating code into libraries of reusable and readable programs.

Code Generation · Program Synthesis

From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought

1 code implementation · 22 Jun 2023 · Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum

Our architecture integrates two computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language.

Probabilistic Programming · Relational Reasoning
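A minimal illustration of the target representation: an utterance is translated into code that conditions a generative world model. Here a hand-written lambda stands in for the LLM translation step, and rejection sampling stands in for a real probabilistic programming backend; the variable names are assumptions.

    import random

    def world_model():
        # Prior over a toy world: two people's heights in cm.
        return {"alice": random.gauss(170, 10), "bob": random.gauss(170, 10)}

    # Utterance: "Alice is taller than Bob." An LLM would emit a
    # condition like this one; here it is written by hand.
    condition = lambda w: w["alice"] > w["bob"]

    # Rejection sampling approximates the posterior given the utterance.
    samples = [w for w in (world_model() for _ in range(10000))
               if condition(w)]
    gap = sum(w["alice"] - w["bob"] for w in samples) / len(samples)
    print(f"E[alice - bob | utterance] = {gap:.1f} cm")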

Sequential Monte Carlo Steering of Large Language Models using Probabilistic Programs

2 code implementations · 5 Jun 2023 · Alexander K. Lew, Tan Zhi-Xuan, Gabriel Grand, Vikash K. Mansinghka

Even after fine-tuning and reinforcement learning, large language models (LLMs) can be difficult, if not impossible, to control reliably with prompts alone.

Language Modelling · Probabilistic Programming +1
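A stripped-down sketch of the sequential Monte Carlo idea: maintain a population of partial sequences, reweight them by a constraint, and resample. A toy uniform proposal stands in for an LLM's next-token distribution, and the constraint is illustrative.

    import random

    VOCAB = list("abc ")

    def propose(prefix):
        # Stand-in for an LM next-token distribution.
        return random.choice(VOCAB)

    def weight(prefix):
        # Soft constraint: sequences containing 'c' are ruled out.
        return 0.0 if "c" in prefix else 1.0

    particles = [""] * 50
    for _ in range(8):
        particles = [p + propose(p) for p in particles]  # extend each particle
        weights = [weight(p) for p in particles]         # score vs. constraint
        if sum(weights) == 0:
            break
        # Resample proportional to weights, focusing compute on
        # particles that still satisfy the constraint.
        particles = random.choices(particles, weights=weights, k=len(particles))

    print(particles[:3])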

Evaluating statistical language models as pragmatic reasoners

1 code implementation · 1 May 2023 · Benjamin Lipkin, Lionel Wong, Gabriel Grand, Joshua B. Tenenbaum

These results inform the inferential capacity of statistical language models, and their use in pragmatic and semantic parsing applications.

Negation · Semantic Parsing

Top-Down Synthesis for Library Learning

1 code implementation · 29 Nov 2022 · Matthew Bowers, Theo X. Olausson, Lionel Wong, Gabriel Grand, Joshua B. Tenenbaum, Kevin Ellis, Armando Solar-Lezama

This paper introduces corpus-guided top-down synthesis as a mechanism for synthesizing library functions that capture common functionality from a corpus of programs in a domain-specific language (DSL).
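A much-simplified stand-in for the corpus-guided part: score candidate abstractions by how often they recur across the corpus. The real method searches top-down over abstractions with holes, which is omitted here; programs are toy nested tuples.

    from collections import Counter

    def subtrees(expr):
        # Enumerate all subexpressions of a program given as nested tuples.
        yield expr
        if isinstance(expr, tuple):
            for child in expr[1:]:
                yield from subtrees(child)

    corpus = [
        ("add", ("mul", "x", "x"), "1"),
        ("sub", ("mul", "y", "y"), "1"),
        ("add", ("mul", "x", "x"), ("mul", "y", "y")),
    ]

    counts = Counter(t for prog in corpus for t in subtrees(prog)
                     if isinstance(t, tuple))
    pattern, n = counts.most_common(1)[0]
    print(f"candidate library function {pattern} covers {n} uses")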

ChemBERTa-2: Towards Chemical Foundation Models

2 code implementations · 5 Sep 2022 · Walid Ahmad, Elana Simon, Seyone Chithrananda, Gabriel Grand, Bharath Ramsundar

Large pretrained models such as GPT-3 have had tremendous impact on modern natural language processing by leveraging self-supervised learning to learn salient representations that can be used to readily finetune on a wide variety of downstream tasks.

Molecular Property Prediction · Self-Supervised Learning

Identifying concept libraries from language about object structure

1 code implementation · 11 May 2022 · Catherine Wong, William P. McCarthy, Gabriel Grand, Yoni Friedman, Joshua B. Tenenbaum, Jacob Andreas, Robert D. Hawkins, Judith E. Fan

Our understanding of the visual world goes beyond naming objects, encompassing our ability to parse objects into meaningful parts, attributes, and relations.

Machine Translation +2

Adversarial Regularization for Visual Question Answering: Strengths, Shortcomings, and Side Effects

1 code implementation · NAACL 2019 · Gabriel Grand, Yonatan Belinkov

Visual question answering (VQA) models have been shown to over-rely on linguistic biases in VQA datasets, answering questions "blindly" without considering visual context.

Question Answering · Visual Question Answering

Semantic projection: recovering human knowledge of multiple, distinct object features from word embeddings

no code implementations · 5 Feb 2018 · Gabriel Grand, Idan Asher Blank, Francisco Pereira, Evelina Fedorenko

Because related words appear in similar contexts, such spaces - called "word embeddings" - can be learned from patterns of lexical co-occurrences in natural language.

Word Embeddings
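A minimal sketch of the semantic projection step: recover a graded feature (here, size) by projecting word vectors onto the line between a pair of pole words. The tiny hand-made vectors stand in for real embeddings learned from co-occurrence statistics; the axis and words are illustrative.

    import numpy as np

    emb = {  # toy stand-ins for learned word embeddings
        "small":    np.array([ 1.0, 0.1, 0.0]),
        "large":    np.array([-1.0, 0.2, 0.1]),
        "mouse":    np.array([ 0.8, 0.5, 0.3]),
        "elephant": np.array([-0.9, 0.6, 0.2]),
    }

    # Feature direction for size: the line from "small" to "large".
    axis = emb["large"] - emb["small"]
    axis = axis / np.linalg.norm(axis)

    for word in ("mouse", "elephant"):
        # Projection onto the axis yields a graded size score.
        print(f"{word}: size score = {float(emb[word] @ axis):+.2f}")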
