Search Results for author: Daniel Fried

Found 51 papers, 30 papers with code

Analyzing the Language of Food on Social Media

no code implementations 8 Sep 2014 Daniel Fried, Mihai Surdeanu, Stephen Kobourov, Melanie Hingle, Dane Bell

We investigate the predictive power behind the language of food on social media.

Incorporating Both Distributional and Relational Semantics in Word Representations

no code implementations 14 Dec 2014 Daniel Fried, Kevin Duh

We investigate the hypothesis that word representations ought to incorporate both distributional and relational semantics.

Knowledge Base Completion

Higher-order Lexical Semantic Models for Non-factoid Answer Reranking

no code implementations TACL 2015 Daniel Fried, Peter Jansen, Gustave Hahn-Powell, Mihai Surdeanu, Peter Clark

We introduce a higher-order formalism that allows all these lexical semantic models to chain direct evidence to construct indirect associations between question and answer texts, by casting the task as the traversal of graphs that encode direct term associations.

Open-Domain Question Answering Semantic Similarity +1
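
The graph-traversal idea above can be sketched with a toy association graph (the terms and weights are illustrative, not from the paper): direct term associations are chained through intermediate terms to score indirect question-answer links.

```python
from itertools import product

# Toy direct-association graph between terms (illustrative weights, not from
# the paper); an edge weight approximates the strength of a direct association.
ASSOC = {
    ("rain", "cloud"): 0.9,
    ("cloud", "sky"): 0.8,
    ("rain", "umbrella"): 0.7,
}

def direct(a, b):
    """First-order (direct) association between two terms."""
    return ASSOC.get((a, b)) or ASSOC.get((b, a)) or 0.0

def chained(a, b, max_hops=2):
    """Higher-order association: chain direct edges through one middle term."""
    best = direct(a, b)
    if max_hops >= 2:
        nodes = {term for edge in ASSOC for term in edge}
        for mid in nodes - {a, b}:
            best = max(best, direct(a, mid) * direct(mid, b))
    return best

def answer_score(question_terms, answer_terms):
    """Average the best chained association over question/answer term pairs."""
    pairs = list(product(question_terms, answer_terms))
    return sum(chained(q, a) for q, a in pairs) / len(pairs)
```

Here "rain" and "sky" share no direct edge, but chaining through "cloud" still produces a nonzero association, which is the kind of indirect evidence the formalism exploits.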

Effective Inference for Generative Neural Parsing

no code implementations EMNLP 2017 Mitchell Stern, Daniel Fried, Dan Klein

Generative neural models have recently achieved state-of-the-art results for constituency parsing.

Constituency Parsing

Unified Pragmatic Models for Generating and Following Instructions

1 code implementation NAACL 2018 Daniel Fried, Jacob Andreas, Dan Klein

We show that explicit pragmatic inference aids in correctly generating and following natural language instructions for complex, sequential tasks.

Text Generation
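
The pragmatic inference described above is typically formalized in the rational-speech-acts style; here is a minimal sketch with a hypothetical toy lexicon (the paper learns its listener and speaker models from data): a literal listener interprets instructions against the lexicon, and a pragmatic speaker picks the instruction that best disambiguates the intended task.

```python
# Toy lexicon: which tasks each instruction literally fits (hypothetical
# example; names and tasks are invented for illustration).
LEXICON = {
    "go left": {"left-short", "left-long"},
    "go far left": {"left-long"},
    "go right": {"right"},
}
TASKS = ["left-short", "left-long", "right"]

def literal_listener(instruction):
    """L0: uniform over tasks consistent with the instruction's literal meaning."""
    fits = LEXICON[instruction]
    return {t: (1 / len(fits) if t in fits else 0.0) for t in TASKS}

def pragmatic_speaker(task):
    """S1: choose the instruction a literal listener is most likely
    to resolve to the intended task."""
    scores = {u: literal_listener(u)[task] for u in LEXICON}
    return max(scores, key=scores.get)
```

For the ambiguous task "left-long" the speaker prefers the more specific "go far left", even though the shorter "go left" is also literally true.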

Speaker-Follower Models for Vision-and-Language Navigation

1 code implementation NeurIPS 2018 Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, Trevor Darrell

We use this speaker model to (1) synthesize new instructions for data augmentation and to (2) implement pragmatic reasoning, which evaluates how well candidate action sequences explain an instruction.

Data Augmentation Vision and Language Navigation

Policy Gradient as a Proxy for Dynamic Oracles in Constituency Parsing

no code implementations ACL 2018 Daniel Fried, Dan Klein

Dynamic oracles provide strong supervision for training constituency parsers with exploration, but must be custom defined for a given parser's transition system.

Constituency Parsing

Are You Looking? Grounding to Multiple Modalities in Vision-and-Language Navigation

no code implementations ACL 2019 Ronghang Hu, Daniel Fried, Anna Rohrbach, Dan Klein, Trevor Darrell, Kate Saenko

The actual grounding can connect language to the environment through multiple modalities, e.g. "stop at the door" might ground into visual objects, while "turn right" might rely only on the geometric structure of a route.

Vision and Language Navigation

Cross-Domain Generalization of Neural Constituency Parsers

1 code implementation ACL 2019 Daniel Fried, Nikita Kitaev, Dan Klein

Neural parsers obtain state-of-the-art results on benchmark treebanks for constituency parsing, but to what degree do they generalize to other domains?

Constituency Parsing Domain Generalization

Syntactic Structure Distillation Pretraining For Bidirectional Encoders

no code implementations 27 May 2020 Adhiguna Kuncoro, Lingpeng Kong, Daniel Fried, Dani Yogatama, Laura Rimell, Chris Dyer, Phil Blunsom

Textual representation learners trained on large amounts of data have achieved notable success on downstream tasks; intriguingly, they have also performed well on challenging tests of syntactic competence.

Knowledge Distillation Language Modelling +3

Reference-Centric Models for Grounded Collaborative Dialogue

1 code implementation EMNLP 2021 Daniel Fried, Justin T. Chiu, Dan Klein

We present a grounded neural dialogue model that successfully collaborates with people in a partially-observable reference game.

Inferring Rewards from Language in Context

1 code implementation ACL 2022 Jessy Lin, Daniel Fried, Dan Klein, Anca Dragan

In classic instruction following, language like "I'd like the JetBlue flight" maps to actions (e.g., selecting that flight).

Instruction Following Reinforcement Learning (RL)

InCoder: A Generative Model for Code Infilling and Synthesis

3 code implementations 12 Apr 2022 Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, Mike Lewis

Our model is the first generative model that is able to directly perform zero-shot code infilling, which we evaluate on challenging tasks such as type inference, comment generation, and variable re-naming.

Code Generation Comment Generation +1
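
The zero-shot infilling above relies on a causal-masking input format; a rough sketch (the sentinel string is illustrative, not the model's exact special token): the missing span is cut out, a sentinel marks the hole, and the model generates the span at the end of the rearranged sequence.

```python
# Causal-masking infilling format, sketched with an illustrative sentinel
# (not the model's exact vocabulary item).

def to_infill_prompt(left, right, mask="<mask:0>"):
    """Cut out the span between `left` and `right`; a model trained on this
    format generates the missing span after the second sentinel."""
    return f"{left}{mask}{right}{mask}"

def splice(left, right, generated):
    """Put a generated span back into the hole at inference time."""
    return f"{left}{generated}{right}"

prompt = to_infill_prompt("def add(a, b):\n    return ", "\n")
```

Because the rearranged sequence is still decoded left to right, the generated span can condition on both the left and right context, which is what enables zero-shot infilling.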

Natural Language to Code Translation with Execution

1 code implementation 25 Apr 2022 Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, Sida I. Wang

In this work, we introduce execution result-based minimum Bayes risk decoding (MBR-EXEC) for program selection and show that it improves the few-shot performance of pretrained code models on natural-language-to-code tasks.

Code Translation Translation
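
The selection rule above can be sketched in a few lines (the candidate programs and inputs are invented, and agreement counting stands in for the paper's exact risk formulation): run each sampled program on shared inputs and keep the one whose outputs agree with the most other samples.

```python
# Execution-based minimum Bayes risk selection, minimal sketch.

def mbr_exec(candidates, test_inputs):
    """Return the candidate whose execution results agree with the most
    other candidates on the shared test inputs."""
    results = []
    for fn in candidates:
        try:
            results.append(tuple(fn(x) for x in test_inputs))
        except Exception:
            results.append(None)  # a crashing program agrees with nothing
    def agreement(i):
        return sum(
            results[i] is not None and results[i] == results[j]
            for j in range(len(candidates)) if j != i
        )
    return candidates[max(range(len(candidates)), key=agreement)]

# Three sampled "programs" for squaring: two equivalent, one wrong.
samples = [lambda x: x * x, lambda x: x ** 2, lambda x: x + x]
chosen = mbr_exec(samples, [2, 3])
```

The two correct samples produce identical outputs and so vote for each other, while the incorrect sample is isolated and rejected.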

Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs

no code implementations 24 Oct 2022 Maarten Sap, Ronan LeBras, Daniel Fried, Yejin Choi

We show that one of today's largest language models (GPT-3; Brown et al., 2020) lacks this kind of social intelligence out-of-the box, using two tasks: SocialIQa (Sap et al., 2019), which measures models' ability to understand intents and reactions of participants of social interactions, and ToMi (Le et al., 2019), which measures whether models can infer mental states and realities of participants of situations.

Navigate Open-Ended Question Answering

Contrastive Decoding: Open-ended Text Generation as Optimization

2 code implementations 27 Oct 2022 Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis

We propose contrastive decoding (CD), a reliable decoding approach that optimizes a contrastive objective subject to a plausibility constraint.

Language Modelling Text Generation
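
A single decoding step of CD can be sketched as follows (the next-token distributions are made up for illustration, not real model outputs): tokens below a fraction alpha of the expert's top probability are pruned by the plausibility constraint, and the remaining tokens are scored by the difference of expert and amateur log-probabilities.

```python
import math

# One contrastive-decoding step with invented distributions: the expert mildly
# favors a generic continuation, the amateur favors it strongly, and CD
# promotes the token where the expert most out-performs the amateur.
p_expert = {"the": 0.45, "unicorn": 0.30, "a": 0.25}
p_amateur = {"the": 0.60, "unicorn": 0.02, "a": 0.38}

def cd_next_token(p_expert, p_amateur, alpha=0.1):
    cutoff = alpha * max(p_expert.values())  # plausibility constraint
    plausible = [t for t, p in p_expert.items() if p >= cutoff]
    return max(plausible,
               key=lambda t: math.log(p_expert[t]) - math.log(p_amateur[t]))
```

Greedy decoding from the expert alone would emit the generic "the"; the contrastive objective instead selects "unicorn", the token the amateur underestimates most.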

Pragmatics in Language Grounding: Phenomena, Tasks, and Modeling Approaches

no code implementations 15 Nov 2022 Daniel Fried, Nicholas Tomlin, Jennifer Hu, Roma Patel, Aida Nematzadeh

People rely heavily on context to enrich meaning beyond what is literally said, enabling concise but effective communication.

Grounded language learning

DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation

1 code implementation 18 Nov 2022 Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Scott Wen-tau Yih, Daniel Fried, Sida Wang, Tao Yu

We introduce DS-1000, a code generation benchmark with a thousand data science problems spanning seven Python libraries, such as NumPy and Pandas.

Code Generation Memorization

G^3: Geolocation via Guidebook Grounding

1 code implementation 28 Nov 2022 Grace Luo, Giscard Biamby, Trevor Darrell, Daniel Fried, Anna Rohrbach

We propose the task of Geolocation via Guidebook Grounding that uses a dataset of StreetView images from a diverse set of locations and an associated textual guidebook for GeoGuessr, a popular interactive geolocation game.

Coder Reviewer Reranking for Code Generation

1 code implementation 29 Nov 2022 Tianyi Zhang, Tao Yu, Tatsunori B. Hashimoto, Mike Lewis, Wen-tau Yih, Daniel Fried, Sida I. Wang

Sampling diverse programs from a code language model and reranking with model likelihood is a popular method for code generation but it is prone to preferring degenerate solutions.

Code Generation Language Modelling
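
The reranking rule can be sketched in a few lines (the programs and log-probabilities below are invented stand-ins for model scores on a "reverse a string" prompt): each candidate gets a "coder" score log p(program | prompt) and a "reviewer" score log p(prompt | program), and the candidate maximizing the sum wins, penalizing degenerate programs that do not explain the prompt.

```python
# Coder-Reviewer reranking, minimal sketch with invented scores.

def rerank(candidates):
    """candidates: (program, coder_logp, reviewer_logp) triples, where
    coder_logp ~ log p(program | prompt) and
    reviewer_logp ~ log p(prompt | program)."""
    return max(candidates, key=lambda c: c[1] + c[2])[0]

candidates = [
    ("return s[::-1]", -1.0, -2.0),          # plausible and explains the prompt
    ("return s", -0.5, -6.0),                # high likelihood, but degenerate
    ("return ''.join(reversed(s))", -3.0, -2.5),
]
```

Reranking by likelihood alone would prefer the degenerate `return s`; adding the reviewer score flips the choice to the program that actually reverses the string.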

Execution-Based Evaluation for Open-Domain Code Generation

1 code implementation 20 Dec 2022 Zhiruo Wang, Shuyan Zhou, Daniel Fried, Graham Neubig

To extend the scope of coding queries to more realistic settings, we propose ODEX, the first Open-Domain EXecution-based natural language (NL) to Python code generation dataset.

Code Generation Memorization

Grounding Language Models to Images for Multimodal Inputs and Outputs

1 code implementation 31 Jan 2023 Jing Yu Koh, Ruslan Salakhutdinov, Daniel Fried

We propose an efficient method to ground pretrained text-only language models to the visual domain, enabling them to process arbitrarily interleaved image-and-text data, and generate text interleaved with retrieved images.

Image Retrieval In-Context Learning +4

StarCoder: may the source be with you!

4 code implementations 9 May 2023 Raymond Li, Loubna Ben allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, Harm de Vries

The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length, infilling capabilities and fast large-batch inference enabled by multi-query attention.

8k Code Generation

Generating Images with Multimodal Language Models

1 code implementation NeurIPS 2023 Jing Yu Koh, Daniel Fried, Ruslan Salakhutdinov

This mapping network translates hidden representations of text into the embedding space of the visual models, enabling us to leverage the strong text representations of the LLM for visual outputs.

Image Retrieval Retrieval +1

Pragmatic Inference with a CLIP Listener for Contrastive Captioning

1 code implementation 15 Jun 2023 Jiefu Ou, Benno Krojer, Daniel Fried

We propose a simple yet effective and robust method for contrastive captioning: generating discriminative captions that distinguish target images from very similar alternative distractor images.
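
The method reranks candidate captions with a listener model; a minimal sketch follows, with made-up caption-image similarity scores standing in for CLIP: keep the caption under which the listener most confidently picks the target image over the distractors.

```python
# Listener-based reranking for contrastive captioning, minimal sketch: the
# similarity table is invented, standing in for CLIP image-text scores.
SIM = {
    ("a dog", "dog1"): 0.9, ("a dog", "dog2"): 0.9,
    ("a dog on grass", "dog1"): 0.8, ("a dog on grass", "dog2"): 0.2,
}

def listener_prob(caption, target, images):
    """Probability the listener picks `target` given `caption` (normalized
    similarities; a real listener would softmax CLIP logits)."""
    total = sum(SIM[(caption, img)] for img in images)
    return SIM[(caption, target)] / total

def best_caption(captions, target, images):
    """Keep the caption under which the listener most confidently finds target."""
    return max(captions, key=lambda c: listener_prob(c, target, images))
```

The generic caption "a dog" fits both images equally, so the listener-aware reranker prefers the discriminative "a dog on grass".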

WebArena: A Realistic Web Environment for Building Autonomous Agents

1 code implementation 25 Jul 2023 Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig

Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions.

Amortizing Pragmatic Program Synthesis with Rankings

1 code implementation 1 Sep 2023 Yewen Pu, Saujas Vaduguru, Priyan Vaithilingam, Elena Glassman, Daniel Fried

We prove that for a pragmatic synthesizer that uses a single demonstration, our global ranking method exactly replicates RSA's ranked responses.

Program Synthesis

SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents

1 code implementation 18 Oct 2023 Xuhui Zhou, Hao Zhu, Leena Mathur, Ruohong Zhang, Haofei Yu, Zhengyang Qi, Louis-Philippe Morency, Yonatan Bisk, Daniel Fried, Graham Neubig, Maarten Sap

We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and evaluate their social intelligence.

API-Assisted Code Generation for Question Answering on Varied Table Structures

no code implementations 23 Oct 2023 Yihan Cao, Shuyi Chen, Ryan Liu, Zhiruo Wang, Daniel Fried

A persistent challenge to table question answering (TableQA) by generating executable programs has been adapting to varied table structures, typically requiring domain-specific logical forms.

Code Generation Question Answering

Data Augmentation for Code Translation with Comparable Corpora and Multiple References

1 code implementation 1 Nov 2023 Yiqing Xie, Atharva Naik, Daniel Fried, Carolyn Rose

One major challenge of translating code between programming languages is that parallel training data is often limited.

Code Translation Data Augmentation +1

Comparative Knowledge Distillation

1 code implementation 3 Nov 2023 Alex Wilf, Alex Tianyi Xu, Paul Pu Liang, Alexander Obolenskiy, Daniel Fried, Louis-Philippe Morency

We observe that prevalent KD techniques and state of the art data augmentation strategies fall short in this constrained setting.

Data Augmentation Knowledge Distillation

Generating Pragmatic Examples to Train Neural Program Synthesizers

1 code implementation 9 Nov 2023 Saujas Vaduguru, Daniel Fried, Yewen Pu

Programming-by-example is the task of synthesizing a program that is consistent with a set of user-provided input-output examples.

Counterfactual Reasoning +1

Asking More Informative Questions for Grounded Retrieval

no code implementations 14 Nov 2023 Sedrick Keh, Justin T. Chiu, Daniel Fried

When a model is trying to gather information in an interactive setting, it benefits from asking informative questions.

Question Answering Question Selection +2

TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks

1 code implementation 23 Jan 2024 Zhiruo Wang, Daniel Fried, Graham Neubig

Language models (LMs) can solve tasks such as answering questions about tables or images by writing programs.

Math Question Answering

VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks

1 code implementation 24 Jan 2024 Jing Yu Koh, Robert Lo, Lawrence Jang, Vikram Duvvur, Ming Chong Lim, Po-Yu Huang, Graham Neubig, Shuyan Zhou, Ruslan Salakhutdinov, Daniel Fried

Through extensive quantitative and qualitative analysis, we identify several limitations of text-only LLM agents, and reveal gaps in the capabilities of state-of-the-art multimodal language agents.

Repetition Improves Language Model Embeddings

1 code implementation 23 Feb 2024 Jacob Mitchell Springer, Suhas Kotha, Daniel Fried, Graham Neubig, Aditi Raghunathan

In this work, we address an architectural limitation of autoregressive models: token embeddings cannot contain information from tokens that appear later in the input.

Language Modelling
