no code implementations • ACL 2022 • Anton Belyy, Chieh-Yang Huang, Jacob Andreas, Emmanouil Antonios Platanios, Sam Thomson, Richard Shin, Subhro Roy, Aleksandr Nisnevich, Charles Chen, Benjamin Van Durme
Collecting data for conversational semantic parsing is a time-consuming and demanding process.
no code implementations • EMNLP 2021 • D. Anthony Bau, Jacob Andreas
After a neural sequence model encounters an unexpected token, can its behavior be predicted?
1 code implementation • 31 May 2023 • Jessy Lin, Nicholas Tomlin, Jacob Andreas, Jason Eisner
In each of these settings, AI assistants and users have disparate abilities that they must combine to arrive at the best decision: assistants can access and process large amounts of information, while users have preferences and constraints external to the system.
no code implementations • 30 May 2023 • Shikhar Murty, Pratyusha Sharma, Jacob Andreas, Christopher D. Manning
When analyzing the relationship between model-internal properties and grokking, we find that optimal depth for grokking can be identified using the tree-structuredness metric of Murty et al. (2023).
no code implementations • 15 May 2023 • Harsh Jhamtani, Hao Fang, Patrick Xia, Eran Levy, Jacob Andreas, Ben Van Durme
We introduce an approach for equipping a simple language-to-code model to handle complex utterances via a process of hierarchical natural language decomposition.
1 code implementation • 3 Apr 2023 • Evan Hernandez, Belinda Z. Li, Jacob Andreas
Neural language models (LMs) represent facts about the world described by text.
no code implementations • 28 Mar 2023 • Eric Chu, Jacob Andreas, Stephen Ansolabehere, Deb Roy
Public opinion reflects and shapes societal behavior, but the traditional survey-based tools to measure it are limited.
no code implementations • 13 Feb 2023 • Yuqing Du, Olivia Watkins, Zihan Wang, Cédric Colas, Trevor Darrell, Pieter Abbeel, Abhishek Gupta, Jacob Andreas
Reinforcement learning algorithms typically struggle in the absence of a dense, well-shaped reward function.
no code implementations • 3 Feb 2023 • Belinda Z. Li, William Chen, Pratyusha Sharma, Jacob Andreas
Language models trained on large text corpora encode rich distributional information about real-world environments and action sequences.
no code implementations • 20 Dec 2022 • Belinda Z. Li, Maxwell Nye, Jacob Andreas
Language models (LMs) often generate incoherent outputs: they refer to events and entity states that are incompatible with the state of the world described in their inputs.
1 code implementation • 19 Dec 2022 • Bairu Hou, Joe O'Connor, Jacob Andreas, Shiyu Chang, Yang Zhang
Instead of directly optimizing in prompt space, PromptBoosting obtains a small pool of prompts via a gradient-free approach and then constructs a large pool of weak learners by pairing these prompts with different elements of the LM's output distribution.
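The boosting step can be sketched as standard AdaBoost over prompt-based weak learners. The helper below is a generic AdaBoost loop, not the paper's implementation; in PromptBoosting's setting, each weak learner would be a (prompt, verbalizer) pair scored by a frozen LM.

```python
import math

def adaboost(weak_learners, X, y, rounds=10):
    """Combine weak learners (e.g., prompt/verbalizer pairs scored by a
    frozen LM) into a boosted ensemble via standard AdaBoost. Labels and
    learner outputs are +1/-1."""
    n = len(X)
    w = [1.0 / n] * n          # per-example weights
    ensemble = []              # (alpha, learner) pairs
    for _ in range(rounds):
        # pick the weak learner with the lowest weighted error
        best, best_err = None, float("inf")
        for h in weak_learners:
            err = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
            if err < best_err:
                best, best_err = h, err
        if best_err >= 0.5:    # no learner beats chance; stop
            break
        alpha = 0.5 * math.log((1 - best_err) / max(best_err, 1e-10))
        ensemble.append((alpha, best))
        # upweight examples the chosen learner got wrong
        w = [wi * math.exp(-alpha * yi * best(xi))
             for wi, xi, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
```

Because each prompt alone is a weak classifier, the ensemble can reach useful accuracy without any gradient access to the LM.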
no code implementations • 3 Dec 2022 • Jacob Andreas
Language models (LMs) are trained on collections of documents, written by individual human agents to achieve specific goals in an outside world.
no code implementations • 28 Nov 2022 • Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, Denny Zhou
We investigate the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly, by encoding smaller models in their activations, and updating these implicit models as new examples appear in the context.
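One concrete version of this hypothesis: on linear regression tasks, the in-context learner may behave like a ridge regressor fit to the context examples. The sketch below computes that reference predictor explicitly; the function name and the `lam` default are illustrative, not from the paper.

```python
import numpy as np

def implicit_ridge_predictor(context_xs, context_ys, query_x, lam=1e-3):
    """One hypothesis for what an in-context learner computes: fit a
    ridge regression to the context (x, y) pairs, then apply the fitted
    model to the query input."""
    X = np.asarray(context_xs, dtype=float)        # shape (n, d)
    y = np.asarray(context_ys, dtype=float)        # shape (n,)
    d = X.shape[1]
    # closed-form ridge solution: w = (X^T X + lam I)^{-1} X^T y
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return float(np.asarray(query_x, dtype=float) @ w)
```

Comparing a transformer's in-context predictions against this closed-form baseline is one way to test whether it is implicitly running such an algorithm.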
1 code implementation • 15 Nov 2022 • Bailin Wang, Ivan Titov, Jacob Andreas, Yoon Kim
We describe a neural transducer that maintains the flexibility of standard sequence-to-sequence (seq2seq) models while incorporating hierarchical phrases as a source of inductive bias during training and as explicit constraints during inference.
no code implementations • 2 Nov 2022 • Shikhar Murty, Pratyusha Sharma, Jacob Andreas, Christopher D. Manning
To evaluate this possibility, we describe an unsupervised and parameter-free method to functionally project the behavior of any transformer into the space of tree-structured networks.
no code implementations • 20 Oct 2022 • Alex Gu, Tamara Mitrovska, Daniela Velez, Jacob Andreas, Armando Solar-Lezama
We introduce ObSynth, an interactive system leveraging the domain knowledge embedded in large language models (LLMs) to help users design object models from high level natural language prompts.
no code implementations • 16 Sep 2022 • Hao Fang, Anusha Balakrishnan, Harsh Jhamtani, John Bufe, Jean Crawford, Jayant Krishnamurthy, Adam Pauls, Jason Eisner, Jacob Andreas, Dan Klein
Satisfying these constraints simultaneously is difficult for the two predominant paradigms in language generation: neural language modeling and rule-based generation.
1 code implementation • 23 May 2022 • Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, Kelvin Guu
In this paper, we propose the problem of fact tracing: identifying which training examples taught an LM to generate a particular factual assertion.
1 code implementation • 11 May 2022 • Catherine Wong, William P. McCarthy, Gabriel Grand, Yoni Friedman, Joshua B. Tenenbaum, Jacob Andreas, Robert D. Hawkins, Judith E. Fan
Our understanding of the visual world goes beyond naming objects, encompassing our ability to parse objects into meaningful parts, attributes, and relations.
no code implementations • 11 Apr 2022 • Pratyusha Sharma, Balakumar Sundaralingam, Valts Blukis, Chris Paxton, Tucker Hermans, Antonio Torralba, Jacob Andreas, Dieter Fox
In this paper, we explore natural language as an expressive and flexible tool for robot correction.
1 code implementation • NeurIPS 2021 • Olivia Watkins, Trevor Darrell, Pieter Abbeel, Jacob Andreas, Abhishek Gupta
Training automated agents to complete complex tasks in interactive environments is challenging: reinforcement learning requires careful hand-engineering of reward functions, imitation learning requires specialized infrastructure and access to a human expert, and learning from intermediate forms of supervision (like binary preferences) is time-consuming and extracts little information from each human intervention.
1 code implementation • 3 Feb 2022 • Shuang Li, Xavier Puig, Chris Paxton, Yilun Du, Clinton Wang, Linxi Fan, Tao Chen, De-An Huang, Ekin Akyürek, Anima Anandkumar, Jacob Andreas, Igor Mordatch, Antonio Torralba, Yuke Zhu
Together, these results suggest that language modeling induces representations that are useful for modeling not just language, but also goals and plans; these representations can aid learning and generalization even outside of language processing.
1 code implementation • 30 Jan 2022 • Ekin Akyürek, Jacob Andreas
Standard deep network models lack the inductive biases needed to generalize compositionally in tasks like semantic parsing, translation, and question answering.
1 code implementation • 26 Jan 2022 • Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, Jacob Andreas
Given a neuron, MILAN generates a description by searching for a natural language string that maximizes pointwise mutual information with the image regions in which the neuron is active.
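The scoring rule can be illustrated with a toy search; the probability tables below are hypothetical stand-ins for MILAN's learned captioner and language-model prior.

```python
import math

def best_description(candidates, p_desc_given_regions, p_desc):
    """Pick the candidate string maximizing pointwise mutual information
    with the neuron's active image regions:
        PMI(d; regions) = log p(d | regions) - log p(d).
    The PMI objective favors descriptions that are specific to the
    regions, not just a priori likely strings."""
    def pmi(d):
        return math.log(p_desc_given_regions[d]) - math.log(p_desc[d])
    return max(candidates, key=pmi)
```

Note that "an animal" may have higher conditional probability than "a dog" yet lose under PMI, because its prior probability is also high.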
no code implementations • 14 Dec 2021 • Athul Paul Jacob, David J. Wu, Gabriele Farina, Adam Lerer, Hengyuan Hu, Anton Bakhtin, Jacob Andreas, Noam Brown
We consider the task of building strong but human-like policies in multi-agent decision-making problems, given examples of human behavior.
1 code implementation • NAACL 2022 • Belinda Z. Li, Jane Yu, Madian Khabsa, Luke Zettlemoyer, Alon Halevy, Jacob Andreas
When a neural language model (LM) is adapted to perform a new task, what aspects of the task predict the eventual performance of the model?
no code implementations • 4 Nov 2021 • Anthony Bau, Jacob Andreas
After a neural sequence model encounters an unexpected token, can its behavior be predicted?
1 code implementation • ICLR 2022 • Afra Feyza Akyürek, Ekin Akyürek, Derry Tanti Wijaya, Jacob Andreas
The key to this approach is a new family of subspace regularization schemes that encourage weight vectors for new classes to lie close to the subspace spanned by the weights of existing classes.
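A minimal sketch of one such scheme, assuming the penalty takes the form strength · ||w − Pw||², where P is the orthogonal projector onto the span of the existing class weights (the function name and API are illustrative):

```python
import numpy as np

def subspace_penalty(w_new, old_weights, strength=1.0):
    """Regularizer encouraging a new class's weight vector to lie close
    to the subspace spanned by existing class weights. Returns
    strength * ||w - P w||^2, where P projects onto span(old_weights)."""
    B = np.stack(old_weights, axis=1)               # (d, k) basis matrix
    # orthogonal projection onto the column space of B
    P = B @ np.linalg.solve(B.T @ B, B.T)
    residual = w_new - P @ w_new
    return strength * float(residual @ residual)
```

The penalty is zero when the new weight vector already lies in the old-class subspace, and grows with the out-of-subspace component.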
Tasks: Class-Incremental Learning, Few-Shot Class-Incremental Learning.
1 code implementation • ICCV 2021 • Sarah Schwettmann, Evan Hernandez, David Bau, Samuel Klein, Jacob Andreas, Antonio Torralba
A large body of recent work has identified transformations in the latent spaces of generative adversarial networks (GANs) that consistently and interpretably transform generated images.
no code implementations • ACL 2022 • Pratyusha Sharma, Antonio Torralba, Jacob Andreas
We evaluate this approach in the ALFRED household simulation environment, providing natural language annotations for only 10% of demonstrations.
no code implementations • ICLR 2022 • Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, Jacob Andreas
Given a neuron, MILAN generates a description by searching for a natural language string that maximizes pointwise mutual information with the image regions in which the neuron is active.
no code implementations • 29 Sep 2021 • Shuang Li, Xavier Puig, Yilun Du, Ekin Akyürek, Antonio Torralba, Jacob Andreas, Igor Mordatch
Additional experiments explore the role of language-based encodings in these results; we find that it is possible to train a simple adapter layer that maps from observations and action histories to LM embeddings, and thus that language modeling provides an effective initializer even for tasks with no language as input or output.
1 code implementation • ACL 2021 • Ekin Akyürek, Jacob Andreas
Sequence-to-sequence transduction is the core problem in language processing applications as diverse as semantic parsing, machine translation, and instruction following.
no code implementations • ACL 2021 • Emmanouil Antonios Platanios, Adam Pauls, Subhro Roy, Yuchen Zhang, Alexander Kyte, Alan Guo, Sam Thomson, Jayant Krishnamurthy, Jason Wolfe, Jacob Andreas, Dan Klein
Conversational semantic parsers map user utterances to executable programs given dialogue histories composed of previous utterances, programs, and system responses.
1 code implementation • 13 Jul 2021 • Eden Bensaid, Mauro Martino, Benjamin Hoover, Jacob Andreas, Hendrik Strobelt
Natural language generation (NLG) for storytelling is especially challenging because it requires the generated text to follow an overall theme while remaining creative and diverse to engage the reader.
no code implementations • 18 Jun 2021 • Catherine Wong, Kevin Ellis, Joshua B. Tenenbaum, Jacob Andreas
Inductive program synthesis, or inferring programs from examples of desired behavior, offers a general paradigm for building interpretable, robust, and generalizable machine learning systems.
1 code implementation • ACL 2021 • Joe O'Connor, Jacob Andreas
Transformer-based language models benefit from conditioning on contexts of hundreds to thousands of previous tokens.
1 code implementation • 7 Jun 2021 • Ekin Akyürek, Jacob Andreas
Sequence-to-sequence transduction is the core problem in language processing applications as diverse as semantic parsing, machine translation, and instruction following.
no code implementations • NAACL 2021 • Pengcheng Yin, Hao Fang, Graham Neubig, Adam Pauls, Emmanouil Antonios Platanios, Yu Su, Sam Thomson, Jacob Andreas
We describe a span-level supervised attention loss that improves compositional generalization in semantic parsers.
1 code implementation • ACL 2021 • Belinda Z. Li, Maxwell Nye, Jacob Andreas
Does the effectiveness of neural language models derive entirely from accurate modeling of surface word co-occurrence statistics, or do these models represent and reason about the world they describe?
no code implementations • CoNLL (EMNLP) 2021 • Evan Hernandez, Jacob Andreas
We show that a variety of linguistic features (including structured dependency relationships) are encoded in low-dimensional subspaces.
no code implementations • 17 Apr 2021 • Jacob Andreas, Gašper Beguš, Michael M. Bronstein, Roee Diamant, Denley Delaney, Shane Gero, Shafi Goldwasser, David F. Gruber, Sarah de Haas, Peter Malkin, Roger Payne, Giovanni Petri, Daniela Rus, Pratyusha Sharma, Dan Tchernov, Pernille Tønnesen, Antonio Torralba, Daniel Vogt, Robert J. Wood
We posit that machine learning will be the cornerstone of future collection, processing, and analysis of multimodal streams of data in animal communication studies, including bioacoustic, behavioral, biological, and environmental data.
no code implementations • NAACL 2021 • Athul Paul Jacob, Mike Lewis, Jacob Andreas
When intelligent agents communicate to accomplish shared goals, how do these goals shape the agents' language?
no code implementations • ICLR 2021 • Maxwell Nye, Yewen Pu, Matthew Bowers, Jacob Andreas, Joshua B. Tenenbaum, Armando Solar-Lezama
In this search process, a key challenge is representing the behavior of a partially written program before it can be executed, to judge if it is on the right track and predict where to search next.
1 code implementation • ICLR 2021 • Ekin Akyürek, Afra Feyza Akyürek, Jacob Andreas
Flexible neural sequence models outperform grammar- and automaton-based counterparts on a variety of tasks.
1 code implementation • 24 Sep 2020 • Semantic Machines, Jacob Andreas, John Bufe, David Burkett, Charles Chen, Josh Clausman, Jean Crawford, Kate Crim, Jordan DeLoach, Leah Dorner, Jason Eisner, Hao Fang, Alan Guo, David Hall, Kristin Hayes, Kellie Hill, Diana Ho, Wendy Iwaszuk, Smriti Jha, Dan Klein, Jayant Krishnamurthy, Theo Lanman, Percy Liang, Christopher H Lin, Ilya Lintsbakh, Andy McGovern, Aleksandr Nisnevich, Adam Pauls, Dmitrij Petters, Brent Read, Dan Roth, Subhro Roy, Jesse Rusak, Beth Short, Div Slomin, Ben Snyder, Stephon Striplin, Yu Su, Zachary Tellman, Sam Thomson, Andrei Vorobev, Izabela Witoszko, Jason Wolfe, Abby Wray, Yuchen Zhang, Alexander Zotov
We describe an approach to task-oriented dialogue in which dialogue state is represented as a dataflow graph.
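A toy version of this representation, with hypothetical node and evaluation APIs, might look like the following: each turn adds operation nodes to the graph, and later turns reference earlier nodes by id instead of restating them.

```python
class DataflowGraph:
    """Minimal sketch of dialogue state as a dataflow graph: each user
    request adds nodes (function calls over earlier results), and later
    turns can reference prior nodes instead of restating them."""
    def __init__(self):
        self.nodes = []                      # list of (op, arg_indices)

    def add(self, op, *arg_indices):
        self.nodes.append((op, arg_indices))
        return len(self.nodes) - 1           # node id for later reference

    def evaluate(self, node_id):
        op, args = self.nodes[node_id]
        return op(*(self.evaluate(a) for a in args))
```

For example, a first turn might build a sum over two values; a follow-up like "double that" simply adds a node referencing the earlier result.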
1 code implementation • 22 Aug 2020 • Geeticka Chauhan, Ruizhi Liao, William Wells, Jacob Andreas, Xin Wang, Seth Berkowitz, Steven Horng, Peter Szolovits, Polina Golland
To take advantage of the rich information present in the radiology reports, we develop a neural network model that is trained on both images and free-text to assess pulmonary edema severity from chest radiographs at inference time.
no code implementations • 23 Jul 2020 • Eric Chu, Deb Roy, Jacob Andreas
We present a randomized controlled trial for a model-in-the-loop regression task, with the goal of measuring the extent to which (1) good explanations of model predictions increase human accuracy, and (2) faulty explanations decrease human trust in the model.
1 code implementation • NeurIPS 2020 • Jesse Mu, Jacob Andreas
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts that closely approximate neuron behavior.
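The search can be sketched as greedy hill-climbing over logical compositions scored by IoU with the neuron's binarized activation mask; the operator set and greedy strategy below are a simplification of the paper's beam search.

```python
def iou(a, b):
    """Intersection-over-union of two binary masks (equal-length lists)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 0.0

def best_composition(neuron_mask, concepts, max_len=2):
    """Start from the single concept best matching the neuron's
    activation mask, then greedily extend it with AND / OR / AND NOT
    over other concepts while IoU improves."""
    best_name, best_mask = max(
        concepts.items(), key=lambda kv: iou(neuron_mask, kv[1]))
    for _ in range(max_len - 1):
        improved = False
        for name, mask in concepts.items():
            for op, fn in (
                ("AND", lambda a, b: [x and y for x, y in zip(a, b)]),
                ("OR", lambda a, b: [x or y for x, y in zip(a, b)]),
                ("AND NOT", lambda a, b: [x and not y for x, y in zip(a, b)]),
            ):
                cand = fn(best_mask, mask)
                if iou(neuron_mask, cand) > iou(neuron_mask, best_mask):
                    best_name = f"({best_name} {op} {name})"
                    best_mask = cand
                    improved = True
        if not improved:
            break
    return best_name, iou(neuron_mask, best_mask)
```

On a neuron that fires on water scenes except rivers, the search should discover a composition like "(water AND NOT river)" rather than either atomic concept.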
1 code implementation • 28 Apr 2020 • Alana Marzoev, Samuel Madden, M. Frans Kaashoek, Michael Cafarella, Jacob Andreas
Large, human-annotated datasets are central to the development of natural language processing models.
2 code implementations • EMNLP 2020 • Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, Joseph Turian
Language understanding research is held back by a failure to relate language to the physical world it describes and to the social interactions it facilitates.
4 code implementations • NeurIPS 2020 • Laura Ruis, Jacob Andreas, Marco Baroni, Diane Bouchacourt, Brenden M. Lake
In this paper, we introduce a new benchmark, gSCAN, for evaluating compositional generalization in situated language understanding.
no code implementations • 10 Jun 2019 • Jelena Luketina, Nantas Nardelli, Gregory Farquhar, Jakob Foerster, Jacob Andreas, Edward Grefenstette, Shimon Whiteson, Tim Rocktäschel
To be successful in real-world tasks, Reinforcement Learning (RL) needs to exploit the compositional, relational, and hierarchical structure of the world, and learn to transfer it to the task at hand.
1 code implementation • ACL 2020 • Jacob Andreas
We propose a simple data augmentation protocol aimed at providing a compositional inductive bias in conditional and unconditional sequence models.
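The protocol's core rule — fragments that ever share an environment may be swapped into each other's other contexts — can be sketched as follows; the string-based fragment matching here is a simplification of the full method.

```python
def augment(dataset, fragments):
    """Sketch of a compositional augmentation rule: if two fragments
    appear in the same environment (the sentence with the fragment
    masked out), treat them as interchangeable and substitute each for
    the other in all of its other contexts."""
    # collect, for each masked environment, the fragments seen in it
    envs = {}
    for sent in dataset:
        for frag in fragments:
            if frag in sent:
                envs.setdefault(sent.replace(frag, "<_>"), set()).add(frag)
    # fragment pairs licensed by a shared environment
    interchangeable = set()
    for frags in envs.values():
        for a in frags:
            for b in frags:
                if a != b:
                    interchangeable.add((a, b))
    # apply substitutions to synthesize new training examples
    new = set()
    for sent in dataset:
        for a, b in interchangeable:
            if a in sent:
                cand = sent.replace(a, b)
                if cand not in dataset:
                    new.add(cand)
    return sorted(new)
```

Here "wug" and "cat" co-occur in one environment, so every other "wug" sentence yields a new "cat" sentence (and vice versa), injecting a compositional bias through data alone.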
2 code implementations • NAACL 2019 • Sheng Shen, Daniel Fried, Jacob Andreas, Dan Klein
We improve the informativeness of models for conditional text generation using techniques from computational pragmatics.
Ranked #1 on Data-to-Text Generation on E2E NLG Challenge.
Tasks: Abstractive Text Summarization, Conditional Text Generation.
1 code implementation • ICLR 2019 • Jacob Andreas
Many machine learning algorithms represent input data with vector embeddings or discrete codes.
1 code implementation • ICLR 2019 • John D. Co-Reyes, Abhishek Gupta, Suvansh Sanjeev, Nick Altieri, Jacob Andreas, John DeNero, Pieter Abbeel, Sergey Levine
However, a single instruction may be insufficient to fully communicate our intent or, even if it is, may be insufficient for an autonomous agent to actually understand how to perform the desired task.
1 code implementation • ECCV 2018 • Ronghang Hu, Jacob Andreas, Trevor Darrell, Kate Saenko
In complex inferential tasks like question answering, machine learning models must confront two challenges: the need to implement a compositional reasoning process, and, in many applications, the need for this reasoning process to be interpretable to assist users in both development and prediction.
Ranked #13 on Referring Expression Comprehension on Talk2Car.
1 code implementation • NeurIPS 2018 • Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, Trevor Darrell
We use this speaker model to (1) synthesize new instructions for data augmentation and to (2) implement pragmatic reasoning, which evaluates how well candidate action sequences explain an instruction.
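Point (2) can be sketched as a rescoring rule that trades off follower and speaker log-probabilities; the `alpha` interpolation and the function signatures below are illustrative, not the paper's exact formulation.

```python
def pragmatic_rescore(instruction, candidates,
                      follower_score, speaker_score, alpha=0.5):
    """Rescore candidate action sequences with a speaker model: prefer
    candidates that both score well under the follower and would make
    the observed instruction likely under the speaker. Both scorers
    return log-probabilities; alpha balances the two models."""
    def combined(actions):
        return ((1 - alpha) * follower_score(actions, instruction)
                + alpha * speaker_score(instruction, actions))
    return max(candidates, key=combined)
```

When the follower is indifferent between two routes, the speaker term breaks the tie toward the route that best explains the instruction.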
1 code implementation • NAACL 2018 • Daniel Fried, Jacob Andreas, Dan Klein
We show that explicit pragmatic inference aids in correctly generating and following natural language instructions for complex, sequential tasks.
1 code implementation • ICML 2018 • Maithra Raghu, Alex Irpan, Jacob Andreas, Robert Kleinberg, Quoc V. Le, Jon Kleinberg
Deep reinforcement learning has achieved many recent successes, but our understanding of its strengths and limitations is hampered by the lack of rich environments in which we can fully characterize optimal behavior, and correspondingly diagnose individual actions against such a characterization.
1 code implementation • NAACL 2018 • Jacob Andreas, Dan Klein, Sergey Levine
The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world.
3 code implementations • EMNLP 2017 • Jacob Andreas, Dan Klein
We investigate the compositional structure of message vectors computed by a deep network trained on a communication game.
no code implementations • ACL 2017 • Mitchell Stern, Jacob Andreas, Dan Klein
In this work, we present a minimal neural model for constituency parsing based on independent scoring of labels and spans.
1 code implementation • ACL 2017 • Jacob Andreas, Anca Dragan, Dan Klein
Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel.
1 code implementation • ICCV 2017 • Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Kate Saenko
Natural language questions are inherently compositional, and many are most easily answered by reasoning about their decomposition into modular sub-problems.
Ranked #45 on Visual Question Answering (VQA) on VQA v2 test-dev.
2 code implementations • CVPR 2017 • Ronghang Hu, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, Kate Saenko
In this paper we instead present a modular deep architecture capable of analyzing referential expressions into their component parts, identifying entities and relationships mentioned in the input expression and grounding them all in the scene.
2 code implementations • ICML 2017 • Jacob Andreas, Dan Klein, Sergey Levine
We describe a framework for multitask deep reinforcement learning guided by policy sketches.
1 code implementation • EMNLP 2016 • Jacob Andreas, Dan Klein
We present a model for pragmatically describing scenes, in which contrastive behavior results from a combination of inference-driven pragmatics and learned semantics.
3 code implementations • NAACL 2016 • Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein
We describe a question answering model that applies to both images and structured knowledge bases.
1 code implementation • CVPR 2016 • Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein
Visual question answering is fundamentally compositional in nature: a question like "where is the dog?"
Ranked #6 on Visual Question Answering (VQA) on VQA v1 test-std.
1 code implementation • EMNLP 2015 • Jacob Andreas, Dan Klein
This paper describes an alignment-based model for interpreting natural language instructions in context.
no code implementations • NeurIPS 2015 • Jacob Andreas, Maxim Rabinovich, Dan Klein, Michael I. Jordan
Calculation of the log-normalizer is a major computational obstacle in applications of log-linear models with large output spaces.
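The quantity in question is the log-sum-exp of the model's unnormalized scores. A numerically stable exact computation — the baseline the paper's approximations aim to beat at scale — is:

```python
import math

def log_normalizer(scores):
    """Numerically stable log-normalizer (log-sum-exp) of a log-linear
    model's unnormalized scores: log sum_i exp(s_i). Subtracting the max
    before exponentiating avoids overflow for large scores; its cost is
    linear in the output-space size, which is the bottleneck at scale."""
    m = max(scores)
    return m + math.log(sum(math.exp(s - m) for s in scores))
```

The max-subtraction trick matters: naively exponentiating scores around 1000 overflows a double, while the shifted form stays finite.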
no code implementations • NeurIPS 2014 • Taylor Berg-Kirkpatrick, Jacob Andreas, Dan Klein
We present a new probabilistic model for transcribing piano music from audio to a symbolic form.
no code implementations • LREC 2012 • Jacob Andreas, Sara Rosenthal, Kathleen McKeown
We introduce a new corpus of sentence-level agreement and disagreement annotations over LiveJournal and Wikipedia threads.