Search Results for author: Jacob Andreas

Found 105 papers, 57 papers with code

Lexicon-Level Contrastive Visual-Grounding Improves Language Modeling

no code implementations 21 Mar 2024 Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas

Today's most accurate language models are trained on orders of magnitude more language data than human language learners receive - but with no supervision from other sensory modalities that play a crucial role in human learning.

Grounded language learning Language Modelling +2

Bayesian Preference Elicitation with Language Models

no code implementations 8 Mar 2024 Kunal Handa, Yarin Gal, Ellie Pavlick, Noah Goodman, Jacob Andreas, Alex Tamkin, Belinda Z. Li

We introduce OPEN (Optimal Preference Elicitation with Natural language), a framework that uses Bayesian optimal experimental design (BOED) to guide the choice of informative questions and an LM to extract features and translate abstract BOED queries into natural language questions.

Experimental Design

In-Context Language Learning: Architectures and Algorithms

1 code implementation 23 Jan 2024 Ekin Akyürek, Bailin Wang, Yoon Kim, Jacob Andreas

Finally, we show that hard-wiring these heads into neural models improves performance not just on ICLL but also on natural language modeling -- improving the perplexity of 340M-parameter models by up to 1.14 points (6.7%) on the SlimPajama dataset.

In-Context Learning Language Modelling

Deductive Closure Training of Language Models for Coherence, Accuracy, and Updatability

no code implementations 16 Jan 2024 Afra Feyza Akyürek, Ekin Akyürek, Leshem Choshen, Derry Wijaya, Jacob Andreas

Given a collection of seed documents, DCT prompts LMs to generate additional text implied by these documents, reason globally about the correctness of this generated text, and finally fine-tune on text inferred to be correct.

Fact Verification Text Generation
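
A minimal sketch of the loop this abstract describes, assuming the LM exposes generation, scoring, and fine-tuning hooks; the three callables and the per-statement threshold are illustrative stand-ins (the paper reasons jointly over the generated set), not the authors' released implementation:

```python
from typing import Callable, List

def dct_round(
    generate: Callable[[str], List[str]],    # LM prompt: document -> implied statements
    truth_score: Callable[[str], float],     # LM judgment: statement -> P(correct)
    finetune: Callable[[List[str]], None],   # trainer: texts -> updated LM weights
    seed_docs: List[str],
    threshold: float = 0.5,
) -> List[str]:
    """One round of Deductive Closure Training: expand seed documents with
    implied text, keep what the model itself judges correct, fine-tune."""
    corpus: List[str] = []
    for doc in seed_docs:
        implied = generate(doc)                                    # step 1: generate implications
        kept = [s for s in implied if truth_score(s) > threshold]  # step 2: verify
        corpus.extend([doc] + kept)
    finetune(corpus)                                               # step 3: update the LM
    return corpus
```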

Learning adaptive planning representations with natural language guidance

no code implementations 13 Dec 2023 Lionel Wong, Jiayuan Mao, Pratyusha Sharma, Zachary S. Siegel, Jiahai Feng, Noa Korneev, Joshua B. Tenenbaum, Jacob Andreas

Effective planning in the real world requires not only world knowledge, but the ability to leverage that knowledge to build the right representation of the task at hand.

Decision Making World Knowledge

Evaluating the Utility of Model Explanations for Model Development

no code implementations 10 Dec 2023 Shawn Im, Jacob Andreas, Yilun Zhou

One of the motivations for explainable AI is to allow humans to make better and more informed decisions regarding the use and deployment of AI models.

counterfactual Decision Making +1

Modeling Boundedly Rational Agents with Latent Inference Budgets

no code implementations 7 Dec 2023 Athul Paul Jacob, Abhishek Gupta, Jacob Andreas

We study the problem of modeling a population of agents pursuing unknown goals subject to unknown computational constraints.

Decision Making Decision Making Under Uncertainty

Regularized Conventions: Equilibrium Computation as a Model of Pragmatic Reasoning

no code implementations 16 Nov 2023 Athul Paul Jacob, Gabriele Farina, Jacob Andreas

We present a model of pragmatic language understanding, where utterances are produced and understood by searching for regularized equilibria of signaling games.

Implicatures

Interpreting User Requests in the Context of Natural Language Standing Instructions

1 code implementation 16 Nov 2023 Nikita Moghe, Patrick Xia, Jacob Andreas, Jason Eisner, Benjamin Van Durme, Harsh Jhamtani

Users of natural language interfaces, generally powered by Large Language Models (LLMs), often must repeat their preferences each time they make a similar request.

Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling

1 code implementation 15 Nov 2023 Bairu Hou, Yujian Liu, Kaizhi Qian, Jacob Andreas, Shiyu Chang, Yang Zhang

Uncertainty decomposition refers to the task of decomposing the total uncertainty of a model into data (aleatoric) uncertainty, resulting from the inherent complexity or ambiguity of the data, and model (epistemic) uncertainty, resulting from the lack of knowledge in the model.

Uncertainty Quantification
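
For reference, the standard entropy-based form of this decomposition is written over an ensemble of models $\theta$; the paper's twist is to build the ensemble over natural language clarifications of the input rather than over model parameters:

```latex
\underbrace{\mathcal{H}\big[\,\mathbb{E}_{\theta}\,p(y \mid x, \theta)\,\big]}_{\text{total uncertainty}}
\;=\;
\underbrace{\mathbb{E}_{\theta}\,\mathcal{H}\big[\,p(y \mid x, \theta)\,\big]}_{\text{aleatoric (data)}}
\;+\;
\underbrace{\mathcal{I}(y;\,\theta \mid x)}_{\text{epistemic (model)}}
```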

LILO: Learning Interpretable Libraries by Compressing and Documenting Code

1 code implementation 30 Oct 2023 Gabriel Grand, Lionel Wong, Maddy Bowers, Theo X. Olausson, Muxin Liu, Joshua B. Tenenbaum, Jacob Andreas

While large language models (LLMs) now excel at code generation, a key aspect of software development is the art of refactoring: consolidating code into libraries of reusable and readable programs.

Code Generation Program Synthesis

Pushdown Layers: Encoding Recursive Structure in Transformer Language Models

1 code implementation 29 Oct 2023 Shikhar Murty, Pratyusha Sharma, Jacob Andreas, Christopher D. Manning

Recursion is a prominent feature of human language, and fundamentally challenging for self-attention due to the lack of an explicit recursive-state tracking mechanism.

text-classification Text Classification

Visual Grounding Helps Learn Word Meanings in Low-Data Regimes

1 code implementation 20 Oct 2023 Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas

But to achieve these results, LMs must be trained in distinctly un-human-like ways - requiring orders of magnitude more language data than children receive during development, and without perceptual or social context.

Image Captioning Language Acquisition +5

Eliciting Human Preferences with Language Models

1 code implementation 17 Oct 2023 Belinda Z. Li, Alex Tamkin, Noah Goodman, Jacob Andreas

Language models (LMs) can be directed to perform target tasks by using labeled examples or natural language prompts.

The Consensus Game: Language Model Generation via Equilibrium Search

no code implementations 13 Oct 2023 Athul Paul Jacob, Yikang Shen, Gabriele Farina, Jacob Andreas

When applied to question answering and other text generation tasks, language models (LMs) may be queried generatively (by sampling answers from their output distribution) or discriminatively (by using them to score or rank a set of candidate outputs).

Language Modelling Question Answering +2

FIND: A Function Description Benchmark for Evaluating Interpretability Methods

1 code implementation NeurIPS 2023 Sarah Schwettmann, Tamar Rott Shaham, Joanna Materzynska, Neil Chowdhury, Shuang Li, Jacob Andreas, David Bau, Antonio Torralba

FIND contains functions that resemble components of trained neural networks, and accompanying descriptions of the kind we seek to generate.

Linearity of Relation Decoding in Transformer Language Models

no code implementations 17 Aug 2023 Evan Hernandez, Arnab Sen Sharma, Tal Haklay, Kevin Meng, Martin Wattenberg, Jacob Andreas, Yonatan Belinkov, David Bau

Linear relation representations may be obtained by constructing a first-order approximation to the LM from a single prompt, and they exist for a variety of factual, commonsense, and linguistic relations.

Relation
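
The first-order approximation mentioned here is a Taylor expansion: if $F$ maps a subject representation $s$ to the LM's predicted object representation, linearizing $F$ around a representation $s_0$ obtained from a single prompt yields an affine relation decoder:

```latex
F(s) \;\approx\; F(s_0) + W\,(s - s_0) \;=\; W s + b,
\qquad
W = \left.\frac{\partial F}{\partial s}\right|_{s_0},
\quad
b = F(s_0) - W s_0 .
```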

From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought

1 code implementation 22 Jun 2023 Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum

Our architecture integrates two computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language.

Probabilistic Programming Relational Reasoning

Decision-Oriented Dialogue for Human-AI Collaboration

1 code implementation 31 May 2023 Jessy Lin, Nicholas Tomlin, Jacob Andreas, Jason Eisner

In each of these settings, AI assistants and users have disparate abilities that they must combine to arrive at the best decision: assistants can access and process large amounts of information, while users have preferences and constraints external to the system.

Grokking of Hierarchical Structure in Vanilla Transformers

1 code implementation 30 May 2023 Shikhar Murty, Pratyusha Sharma, Jacob Andreas, Christopher D. Manning

When analyzing the relationship between model-internal properties and grokking, we find that optimal depth for grokking can be identified using the tree-structuredness metric of Murty et al. (2023).

Natural Language Decomposition and Interpretation of Complex Utterances

no code implementations 15 May 2023 Harsh Jhamtani, Hao Fang, Patrick Xia, Eran Levy, Jacob Andreas, Ben Van Durme

Designing natural language interfaces has historically required collecting supervised data to translate user requests into carefully designed intent representations.

Language Modelling

Language Models Trained on Media Diets Can Predict Public Opinion

no code implementations 28 Mar 2023 Eric Chu, Jacob Andreas, Stephen Ansolabehere, Deb Roy

Public opinion reflects and shapes societal behavior, but the traditional survey-based tools to measure it are limited.

Probing Language Models

LaMPP: Language Models as Probabilistic Priors for Perception and Action

no code implementations 3 Feb 2023 Belinda Z. Li, William Chen, Pratyusha Sharma, Jacob Andreas

Language models trained on large text corpora encode rich distributional information about real-world environments and action sequences.

Activity Recognition Decision Making +2

Language Modeling with Latent Situations

no code implementations 20 Dec 2022 Belinda Z. Li, Maxwell Nye, Jacob Andreas

Language models (LMs) often generate incoherent outputs: they refer to events and entity states that are incompatible with the state of the world described in their inputs.

Language Modelling

PromptBoosting: Black-Box Text Classification with Ten Forward Passes

1 code implementation 19 Dec 2022 Bairu Hou, Joe O'Connor, Jacob Andreas, Shiyu Chang, Yang Zhang

Instead of directly optimizing in prompt space, PromptBoosting obtains a small pool of prompts via a gradient-free approach and then constructs a large pool of weak learners by pairing these prompts with different elements of the LM's output distribution.

Language Modelling text-classification +1
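
A compact sketch of the boosting stage, assuming each (prompt, verbalizer) pair has already been run through the frozen LM to produce per-example label predictions; this is plain binary AdaBoost, and the array layout is an illustrative assumption rather than the released API:

```python
import numpy as np

def adaboost_prompts(weak_preds: np.ndarray, y: np.ndarray, n_rounds: int = 10):
    """weak_preds: (n_learners, n_examples) predicted labels per prompt-based
    weak learner; y: (n_examples,) gold labels. Returns chosen learner
    indices and their ensemble weights."""
    n_learners, n = weak_preds.shape
    w = np.full(n, 1.0 / n)                       # example weights
    chosen, alphas = [], []
    for _ in range(n_rounds):
        # weighted error of every weak learner under current example weights
        errs = np.array([(w * (weak_preds[j] != y)).sum() for j in range(n_learners)])
        j = int(errs.argmin())
        eps = float(np.clip(errs[j], 1e-12, 1 - 1e-12))
        alpha = 0.5 * np.log((1 - eps) / eps)     # learner weight
        w *= np.exp(alpha * (weak_preds[j] != y)) # upweight mistakes
        w /= w.sum()
        chosen.append(j)
        alphas.append(alpha)
    return chosen, alphas
```

The final classifier is then a weighted vote of the chosen prompts, which is why inference needs only a handful of forward passes.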

Language Models as Agent Models

no code implementations 3 Dec 2022 Jacob Andreas

Language models (LMs) are trained on collections of documents, written by individual human agents to achieve specific goals in an outside world.

What learning algorithm is in-context learning? Investigations with linear models

no code implementations 28 Nov 2022 Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, Denny Zhou

We investigate the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly, by encoding smaller models in their activations, and updating these implicit models as new examples appear in the context.

In-Context Learning regression
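
The hypothesis is directly testable: if an in-context learner implicitly runs a standard estimator, its prediction for a query point should track that estimator's output on the same context. A minimal reference for one such candidate (ridge regression; the regularizer and shapes are illustrative choices):

```python
import numpy as np

def ridge_predict(X_ctx: np.ndarray, y_ctx: np.ndarray,
                  x_query: np.ndarray, lam: float = 1e-3) -> float:
    """Closed-form ridge regression fit on the in-context exemplars
    (X_ctx: (n, d), y_ctx: (n,)), evaluated at a query input."""
    d = X_ctx.shape[1]
    w = np.linalg.solve(X_ctx.T @ X_ctx + lam * np.eye(d), X_ctx.T @ y_ctx)
    return float(x_query @ w)
```

Comparing this output with the transformer's in-context prediction across many sampled regression problems is, schematically, how such implicit algorithms are probed.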

Hierarchical Phrase-based Sequence-to-Sequence Learning

1 code implementation 15 Nov 2022 Bailin Wang, Ivan Titov, Jacob Andreas, Yoon Kim

We describe a neural transducer that maintains the flexibility of standard sequence-to-sequence (seq2seq) models while incorporating hierarchical phrases as a source of inductive bias during training and as explicit constraints during inference.

Inductive Bias Machine Translation +2

Characterizing Intrinsic Compositionality in Transformers with Tree Projections

no code implementations 2 Nov 2022 Shikhar Murty, Pratyusha Sharma, Jacob Andreas, Christopher D. Manning

To evaluate this possibility, we describe an unsupervised and parameter-free method to functionally project the behavior of any transformer into the space of tree-structured networks.

Sentence

ObSynth: An Interactive Synthesis System for Generating Object Models from Natural Language Specifications

no code implementations 20 Oct 2022 Alex Gu, Tamara Mitrovska, Daniela Velez, Jacob Andreas, Armando Solar-Lezama

We introduce ObSynth, an interactive system leveraging the domain knowledge embedded in large language models (LLMs) to help users design object models from high-level natural language prompts.

Object

Towards Tracing Factual Knowledge in Language Models Back to the Training Data

1 code implementation 23 May 2022 Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, Kelvin Guu

In this paper, we propose the problem of fact tracing: identifying which training examples taught an LM to generate a particular factual assertion.

Information Retrieval Retrieval

Identifying concept libraries from language about object structure

1 code implementation 11 May 2022 Catherine Wong, William P. McCarthy, Gabriel Grand, Yoni Friedman, Joshua B. Tenenbaum, Jacob Andreas, Robert D. Hawkins, Judith E. Fan

Our understanding of the visual world goes beyond naming objects, encompassing our ability to parse objects into meaningful parts, attributes, and relations.

Machine Translation Object +1

Teachable Reinforcement Learning via Advice Distillation

1 code implementation NeurIPS 2021 Olivia Watkins, Trevor Darrell, Pieter Abbeel, Jacob Andreas, Abhishek Gupta

Training automated agents to complete complex tasks in interactive environments is challenging: reinforcement learning requires careful hand-engineering of reward functions, imitation learning requires specialized infrastructure and access to a human expert, and learning from intermediate forms of supervision (like binary preferences) is time-consuming and extracts little information from each human intervention.

Imitation Learning reinforcement-learning +1

Pre-Trained Language Models for Interactive Decision-Making

1 code implementation 3 Feb 2022 Shuang Li, Xavier Puig, Chris Paxton, Yilun Du, Clinton Wang, Linxi Fan, Tao Chen, De-An Huang, Ekin Akyürek, Anima Anandkumar, Jacob Andreas, Igor Mordatch, Antonio Torralba, Yuke Zhu

Together, these results suggest that language modeling induces representations that are useful for modeling not just language, but also goals and plans; these representations can aid learning and generalization even outside of language processing.

Imitation Learning Language Modelling

Compositionality as Lexical Symmetry

1 code implementation 30 Jan 2022 Ekin Akyürek, Jacob Andreas

In tasks like semantic parsing, instruction following, and question answering, standard deep networks fail to generalize compositionally from small datasets.

Data Augmentation Inductive Bias +5

Natural Language Descriptions of Deep Visual Features

2 code implementations 26 Jan 2022 Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, Jacob Andreas

Given a neuron, MILAN generates a description by searching for a natural language string that maximizes pointwise mutual information with the image regions in which the neuron is active.

Attribute
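
Written out, the search objective is a pointwise mutual information score between a candidate description $d$ and the neuron's exemplar image regions $E$, with $p(d \mid E)$ from a learned captioner and $p(d)$ from a language model (implementations may weight the two terms differently; the bare form is):

```latex
d^{*} \;=\; \arg\max_{d}\; \operatorname{PMI}(d; E)
\;=\; \arg\max_{d}\; \big[\, \log p(d \mid E) \;-\; \log p(d) \,\big]
```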

Modeling Strong and Human-Like Gameplay with KL-Regularized Search

no code implementations 14 Dec 2021 Athul Paul Jacob, David J. Wu, Gabriele Farina, Adam Lerer, Hengyuan Hu, Anton Bakhtin, Jacob Andreas, Noam Brown

We consider the task of building strong but human-like policies in multi-agent decision-making problems, given examples of human behavior.

Imitation Learning

Quantifying Adaptability in Pre-trained Language Models with 500 Tasks

2 code implementations NAACL 2022 Belinda Z. Li, Jane Yu, Madian Khabsa, Luke Zettlemoyer, Alon Halevy, Jacob Andreas

When a neural language model (LM) is adapted to perform a new task, what aspects of the task predict the eventual performance of the model?

Language Modelling Logical Reasoning +2

Subspace Regularizers for Few-Shot Class Incremental Learning

1 code implementation ICLR 2022 Afra Feyza Akyürek, Ekin Akyürek, Derry Tanti Wijaya, Jacob Andreas

The key to this approach is a new family of subspace regularization schemes that encourage weight vectors for new classes to lie close to the subspace spanned by the weights of existing classes.

Few-Shot Class-Incremental Learning Image Classification +2
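
A minimal numpy sketch of the core penalty, under the simplifying assumptions that the subspace is the row space of the existing class weights and that those weights have full row rank; the paper studies a broader family of such schemes:

```python
import numpy as np

def subspace_penalty(W_old: np.ndarray, w_new: np.ndarray) -> float:
    """Squared distance between a new class's weight vector w_new (d,) and
    its projection onto the subspace spanned by the rows of W_old
    (n_old_classes, d)."""
    # Orthonormal basis for the row space via thin SVD.
    basis = np.linalg.svd(W_old, full_matrices=False)[2]  # (k, d)
    projection = basis.T @ (basis @ w_new)
    return float(np.sum((w_new - projection) ** 2))
```

Adding this term to the few-shot loss pulls novel-class weights toward directions already used by existing classes.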

Toward a Visual Concept Vocabulary for GAN Latent Space

1 code implementation ICCV 2021 Sarah Schwettmann, Evan Hernandez, David Bau, Samuel Klein, Jacob Andreas, Antonio Torralba

A large body of recent work has identified transformations in the latent spaces of generative adversarial networks (GANs) that consistently and interpretably transform generated images.

Disentanglement

Skill Induction and Planning with Latent Language

no code implementations ACL 2022 Pratyusha Sharma, Antonio Torralba, Jacob Andreas

We evaluate this approach in the ALFRED household simulation environment, providing natural language annotations for only 10% of demonstrations.

Decision Making

Language Model Pre-training Improves Generalization in Policy Learning

no code implementations 29 Sep 2021 Shuang Li, Xavier Puig, Yilun Du, Ekin Akyürek, Antonio Torralba, Jacob Andreas, Igor Mordatch

Additional experiments explore the role of language-based encodings in these results; we find that it is possible to train a simple adapter layer that maps from observations and action histories to LM embeddings, and thus that language modeling provides an effective initializer even for tasks with no language as input or output.

Imitation Learning Language Modelling

Natural Language Descriptions of Deep Features

no code implementations ICLR 2022 Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, Jacob Andreas

Given a neuron, MILAN generates a description by searching for a natural language string that maximizes pointwise mutual information with the image regions in which the neuron is active.

Attribute

Lexicon Learning for Few Shot Sequence Modeling

1 code implementation ACL 2021 Ekin Akyurek, Jacob Andreas

Sequence-to-sequence transduction is the core problem in language processing applications as diverse as semantic parsing, machine translation, and instruction following.

Instruction Following Machine Translation +3

Value-Agnostic Conversational Semantic Parsing

no code implementations ACL 2021 Emmanouil Antonios Platanios, Adam Pauls, Subhro Roy, Yuchen Zhang, Alexander Kyte, Alan Guo, Sam Thomson, Jayant Krishnamurthy, Jason Wolfe, Jacob Andreas, Dan Klein

Conversational semantic parsers map user utterances to executable programs given dialogue histories composed of previous utterances, programs, and system responses.

Computational Efficiency Semantic Parsing

FairyTailor: A Multimodal Generative Framework for Storytelling

1 code implementation 13 Jul 2021 Eden Bensaid, Mauro Martino, Benjamin Hoover, Jacob Andreas, Hendrik Strobelt

Natural language generation (NLG) for storytelling is especially challenging because it requires the generated text to follow an overall theme while remaining creative and diverse to engage the reader.

Story Generation

Leveraging Language to Learn Program Abstractions and Search Heuristics

no code implementations 18 Jun 2021 Catherine Wong, Kevin Ellis, Joshua B. Tenenbaum, Jacob Andreas

Inductive program synthesis, or inferring programs from examples of desired behavior, offers a general paradigm for building interpretable, robust, and generalizable machine learning systems.

Program Synthesis

What Context Features Can Transformer Language Models Use?

1 code implementation ACL 2021 Joe O'Connor, Jacob Andreas

Transformer-based language models benefit from conditioning on contexts of hundreds to thousands of previous tokens.

Lexicon Learning for Few-Shot Neural Sequence Modeling

1 code implementation 7 Jun 2021 Ekin Akyürek, Jacob Andreas

Sequence-to-sequence transduction is the core problem in language processing applications as diverse as semantic parsing, machine translation, and instruction following.

Instruction Following Machine Translation +3

Implicit Representations of Meaning in Neural Language Models

1 code implementation ACL 2021 Belinda Z. Li, Maxwell Nye, Jacob Andreas

Does the effectiveness of neural language models derive entirely from accurate modeling of surface word co-occurrence statistics, or do these models represent and reason about the world they describe?

Text Generation

The Low-Dimensional Linear Geometry of Contextualized Word Representations

no code implementations CoNLL (EMNLP) 2021 Evan Hernandez, Jacob Andreas

We show that a variety of linguistic features (including structured dependency relationships) are encoded in low-dimensional subspaces.

Cetacean Translation Initiative: a roadmap to deciphering the communication of sperm whales

no code implementations 17 Apr 2021 Jacob Andreas, Gašper Beguš, Michael M. Bronstein, Roee Diamant, Denley Delaney, Shane Gero, Shafi Goldwasser, David F. Gruber, Sarah de Haas, Peter Malkin, Roger Payne, Giovanni Petri, Daniela Rus, Pratyusha Sharma, Dan Tchernov, Pernille Tønnesen, Antonio Torralba, Daniel Vogt, Robert J. Wood

We posit that machine learning will be the cornerstone of future collection, processing, and analysis of multimodal streams of data in animal communication studies, including bioacoustic, behavioral, biological, and environmental data.

BIG-bench Machine Learning Sentence +1

Multitasking Inhibits Semantic Drift

no code implementations NAACL 2021 Athul Paul Jacob, Mike Lewis, Jacob Andreas

When intelligent agents communicate to accomplish shared goals, how do these goals shape the agents' language?

Representing Partial Programs with Blended Abstract Semantics

no code implementations ICLR 2021 Maxwell Nye, Yewen Pu, Matthew Bowers, Jacob Andreas, Joshua B. Tenenbaum, Armando Solar-Lezama

In this search process, a key challenge is representing the behavior of a partially written program before it can be executed, to judge if it is on the right track and predict where to search next.

Program Synthesis

Joint Modeling of Chest Radiographs and Radiology Reports for Pulmonary Edema Assessment

1 code implementation 22 Aug 2020 Geeticka Chauhan, Ruizhi Liao, William Wells, Jacob Andreas, Xin Wang, Seth Berkowitz, Steven Horng, Peter Szolovits, Polina Golland

To take advantage of the rich information present in the radiology reports, we develop a neural network model that is trained on both images and free-text to assess pulmonary edema severity from chest radiographs at inference time.

Image Classification Representation Learning

Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction

no code implementations 23 Jul 2020 Eric Chu, Deb Roy, Jacob Andreas

We present a randomized controlled trial for a model-in-the-loop regression task, with the goal of measuring the extent to which (1) good explanations of model predictions increase human accuracy, and (2) faulty explanations decrease human trust in the model.

Decision Making Experimental Design

Compositional Explanations of Neurons

1 code implementation NeurIPS 2020 Jesse Mu, Jacob Andreas

We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts that closely approximate neuron behavior.

Image Classification Natural Language Inference
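
A sketch of the procedure this describes: beam search over logical compositions of atomic concept masks, scored by intersection-over-union (IoU) against the neuron's binarized activation mask. Beam width, maximum formula length, and the operator set are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    return float((a & b).sum()) / max(float((a | b).sum()), 1.0)

def compositional_explanation(neuron_mask, concepts, max_len=3, beam_width=10):
    """neuron_mask: boolean array of thresholded activations; concepts:
    dict mapping concept name -> boolean mask of the same shape."""
    beams = sorted(
        ((iou(neuron_mask, m), name, m) for name, m in concepts.items()),
        key=lambda t: -t[0])[:beam_width]
    for _ in range(max_len - 1):
        candidates = list(beams)  # keep shorter formulas in the running
        for _, formula, mask in beams:
            for name, m in concepts.items():
                for op, combined in (("AND", mask & m), ("OR", mask | m),
                                     ("AND NOT", mask & ~m)):
                    candidates.append((iou(neuron_mask, combined),
                                       f"({formula} {op} {name})", combined))
        candidates.sort(key=lambda t: -t[0])
        beams = candidates[:beam_width]
    score, formula, _ = beams[0]
    return formula, score
```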

Experience Grounds Language

2 code implementations EMNLP 2020 Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, Joseph Turian

Language understanding research is held back by a failure to relate language to the physical world it describes and to the social interactions it facilitates.

Representation Learning

A Benchmark for Systematic Generalization in Grounded Language Understanding

4 code implementations NeurIPS 2020 Laura Ruis, Jacob Andreas, Marco Baroni, Diane Bouchacourt, Brenden M. Lake

In this paper, we introduce a new benchmark, gSCAN, for evaluating compositional generalization in situated language understanding.

Systematic Generalization

A Survey of Reinforcement Learning Informed by Natural Language

no code implementations 10 Jun 2019 Jelena Luketina, Nantas Nardelli, Gregory Farquhar, Jakob Foerster, Jacob Andreas, Edward Grefenstette, Shimon Whiteson, Tim Rocktäschel

To be successful in real-world tasks, Reinforcement Learning (RL) needs to exploit the compositional, relational, and hierarchical structure of the world, and learn to transfer it to the task at hand.

Decision Making Instruction Following +5

Good-Enough Compositional Data Augmentation

1 code implementation ACL 2020 Jacob Andreas

We propose a simple data augmentation protocol aimed at providing a compositional inductive bias in conditional and unconditional sequence models.

Data Augmentation Inductive Bias +2

Guiding Policies with Language via Meta-Learning

1 code implementation ICLR 2019 John D. Co-Reyes, Abhishek Gupta, Suvansh Sanjeev, Nick Altieri, Jacob Andreas, John DeNero, Pieter Abbeel, Sergey Levine

However, a single instruction may be insufficient to fully communicate our intent or, even if it is, may be insufficient for an autonomous agent to actually understand how to perform the desired task.

Imitation Learning Instruction Following +1

Explainable Neural Computation via Stack Neural Module Networks

1 code implementation ECCV 2018 Ronghang Hu, Jacob Andreas, Trevor Darrell, Kate Saenko

In complex inferential tasks like question answering, machine learning models must confront two challenges: the need to implement a compositional reasoning process, and, in many applications, the need for this reasoning process to be interpretable to assist users in both development and prediction.

Decision Making Question Answering +1

Speaker-Follower Models for Vision-and-Language Navigation

1 code implementation NeurIPS 2018 Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, Trevor Darrell

We use this speaker model to (1) synthesize new instructions for data augmentation and to (2) implement pragmatic reasoning, which evaluates how well candidate action sequences explain an instruction.

Data Augmentation Vision and Language Navigation
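
The pragmatic reasoning in (2) can be written as rescoring: among candidate routes $r$ proposed by the follower for instruction $d$, prefer routes that best explain the instruction under the speaker model, interpolated with the follower's own score (the weight $\lambda$ is a tuned hyperparameter; this is a schematic form, not necessarily the paper's exact parameterization):

```latex
r^{*} \;=\; \arg\max_{r \in \mathcal{R}(d)}\;
\lambda \,\log P_{\text{speaker}}(d \mid r)
\;+\; (1 - \lambda)\,\log P_{\text{follower}}(r \mid d)
```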

Unified Pragmatic Models for Generating and Following Instructions

1 code implementation NAACL 2018 Daniel Fried, Jacob Andreas, Dan Klein

We show that explicit pragmatic inference aids in correctly generating and following natural language instructions for complex, sequential tasks.

Text Generation

Can Deep Reinforcement Learning Solve Erdos-Selfridge-Spencer Games?

1 code implementation ICML 2018 Maithra Raghu, Alex Irpan, Jacob Andreas, Robert Kleinberg, Quoc V. Le, Jon Kleinberg

Deep reinforcement learning has achieved many recent successes, but our understanding of its strengths and limitations is hampered by the lack of rich environments in which we can fully characterize optimal behavior, and correspondingly diagnose individual actions against such a characterization.

reinforcement-learning Reinforcement Learning (RL)

Learning with Latent Language

1 code implementation NAACL 2018 Jacob Andreas, Dan Klein, Sergey Levine

The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world.

Image Classification Navigate

Analogs of Linguistic Structure in Deep Representations

3 code implementations EMNLP 2017 Jacob Andreas, Dan Klein

We investigate the compositional structure of message vectors computed by a deep network trained on a communication game.

Negation

A Minimal Span-Based Neural Constituency Parser

no code implementations ACL 2017 Mitchell Stern, Jacob Andreas, Dan Klein

In this work, we present a minimal neural model for constituency parsing based on independent scoring of labels and spans.

Constituency Parsing

Translating Neuralese

1 code implementation ACL 2017 Jacob Andreas, Anca Dragan, Dan Klein

Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel.

Machine Translation Translation

Modeling Relationships in Referential Expressions with Compositional Modular Networks

2 code implementations CVPR 2017 Ronghang Hu, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, Kate Saenko

In this paper we instead present a modular deep architecture capable of analyzing referential expressions into their component parts, identifying entities and relationships mentioned in the input expression and grounding them all in the scene.

Visual Question Answering (VQA)

Reasoning About Pragmatics with Neural Listeners and Speakers

1 code implementation EMNLP 2016 Jacob Andreas, Dan Klein

We present a model for pragmatically describing scenes, in which contrastive behavior results from a combination of inference-driven pragmatics and learned semantics.

Referring Expression Text Generation
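
A sketch of the inference-driven pragmatics described here: sample candidate utterances from a learned base speaker, then rerank them by how reliably a learned listener would resolve them to the target scene rather than a distractor. Every method name below (sample, logprob, log_resolve) is an illustrative assumption:

```python
def pragmatic_describe(speaker, listener, target, distractor,
                       n_samples=100, lam=0.5):
    """Sample-and-rerank pragmatic speaker: trade off listener success
    against speaker fluency in log space."""
    candidates = {speaker.sample(target) for _ in range(n_samples)}
    def score(utterance):
        return (lam * listener.log_resolve(utterance, target, distractor)
                + (1 - lam) * speaker.logprob(utterance, target))
    return max(candidates, key=score)
```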

Neural Module Networks

1 code implementation CVPR 2016 Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein

Visual question answering is fundamentally compositional in nature -- a question like "where is the dog?"

Visual Question Answering

Alignment-based compositional semantics for instruction following

1 code implementation EMNLP 2015 Jacob Andreas, Dan Klein

This paper describes an alignment-based model for interpreting natural language instructions in context.

Instruction Following Sentence

On the accuracy of self-normalized log-linear models

no code implementations NeurIPS 2015 Jacob Andreas, Maxim Rabinovich, Dan Klein, Michael I. Jordan

Calculation of the log-normalizer is a major computational obstacle in applications of log-linear models with large output spaces.

Generalization Bounds

Unsupervised Transcription of Piano Music

no code implementations NeurIPS 2014 Taylor Berg-Kirkpatrick, Jacob Andreas, Dan Klein

We present a new probabilistic model for transcribing piano music from audio to a symbolic form.

Annotating Agreement and Disagreement in Threaded Discussion

no code implementations LREC 2012 Jacob Andreas, Sara Rosenthal, Kathleen McKeown

We introduce a new corpus of sentence-level agreement and disagreement annotations over LiveJournal and Wikipedia threads.

Sentence
