Search Results for author: Bill Dolan

Found 47 papers, 27 papers with code

What Makes Good In-Context Examples for GPT-3?

no code implementations DeeLIO (ACL) 2022 Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen

In this work, we investigate whether there are more effective strategies for judiciously selecting in-context examples (relative to random sampling) that better leverage GPT-3’s in-context learning capabilities. Inspired by the recent success of leveraging a retrieval module to augment neural networks, we propose to retrieve examples that are semantically-similar to a test query sample to formulate its corresponding prompt.

In-Context Learning · Natural Language Understanding · +4

GRIM: GRaph-based Interactive narrative visualization for gaMes

no code implementations 15 Nov 2023 Jorge Leandro, Sudha Rao, Michael Xu, Weijia Xu, Nebojsa Jojic, Chris Brockett, Bill Dolan

GRIM, a prototype GRaph-based Interactive narrative visualization system for gaMes, generates a rich narrative graph with branching storylines that match a high-level narrative description and constraints provided by the designer.

Investigating Agency of LLMs in Human-AI Collaboration Tasks

no code implementations 22 May 2023 Ashish Sharma, Sudha Rao, Chris Brockett, Akanksha Malhotra, Nebojsa Jojic, Bill Dolan

While LLMs are being developed to simulate human behavior and serve as human-like agents, little attention has been given to the Agency that these models should possess in order to proactively manage the direction of interaction and collaboration.

Interactive Text Generation

no code implementations 2 Mar 2023 Felix Faltings, Michel Galley, Baolin Peng, Kianté Brantley, Weixin Cai, Yizhe Zhang, Jianfeng Gao, Bill Dolan

Unfortunately, this means most of the research on text, code, and image generation has focused on non-interactive settings, whereby the model is expected to get everything right without accounting for any input from a user who may be willing to help.

Image Generation · Imitation Learning · +1

Grounded Keys-to-Text Generation: Towards Factual Open-Ended Generation

no code implementations 4 Dec 2022 Faeze Brahman, Baolin Peng, Michel Galley, Sudha Rao, Bill Dolan, Snigdha Chaturvedi, Jianfeng Gao

We propose a new grounded keys-to-text generation task: the task is to generate a factual description about an entity given a set of guiding keys and grounding passages.

Data-to-Text Generation

Towards More Efficient Insertion Transformer with Fractional Positional Encoding

1 code implementation 12 Dec 2021 Zhisong Zhang, Yizhe Zhang, Bill Dolan

Nevertheless, due to the incompatibility between absolute positional encoding and insertion-based generation schemes, it needs to refresh the encoding of every token in the generated partial hypothesis at each step, which could be costly.

Text Generation
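The incompatibility described in this abstract can be shown in a few lines: with absolute positional encoding, one insertion shifts the index of every later token, forcing their encodings to be recomputed, whereas a fractional scheme can leave existing positions untouched. The sketch below is illustrative only (the midpoint rule and all names are assumptions in the spirit of the title, not the paper's exact construction).

```python
def positions_absolute(tokens):
    # Absolute scheme: a token's position is simply its index.
    return list(enumerate(tokens))

def insert_absolute(tokens, idx, tok):
    # Inserting shifts every later index, so the positional encodings of
    # all subsequent tokens must be refreshed at each generation step.
    return positions_absolute(tokens[:idx] + [tok] + tokens[idx:])

def insert_fractional(positions, idx, tok):
    # Illustrative fractional scheme: the new token takes the midpoint of
    # its neighbours' positions; every existing token keeps its position.
    left = positions[idx - 1][0] if idx > 0 else positions[0][0] - 1.0
    right = positions[idx][0] if idx < len(positions) else left + 2.0
    return positions[:idx] + [((left + right) / 2.0, tok)] + positions[idx:]

hyp = ["the", "sat", "mat"]
print(insert_absolute(hyp, 1, "cat"))
# "sat" and "mat" now carry new absolute positions 2 and 3

frac = [(1.0, "the"), (2.0, "sat"), (3.0, "mat")]
print(insert_fractional(frac, 1, "cat"))
# "cat" lands at 1.5; "the", "sat", "mat" keep 1.0, 2.0, 3.0
```

The point of the second function is that insertion becomes a local operation: only the new token needs an encoding.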

Automatic Document Sketching: Generating Drafts from Analogous Texts

no code implementations Findings (ACL) 2021 Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Bill Dolan

The advent of large pre-trained language models has made it possible to make high-quality predictions on how to add or change a sentence in a document.

Reinforcement Learning (RL) · Sentence · +1

RetGen: A Joint framework for Retrieval and Grounded Text Generation Modeling

1 code implementation 14 May 2021 Yizhe Zhang, Siqi Sun, Xiang Gao, Yuwei Fang, Chris Brockett, Michel Galley, Jianfeng Gao, Bill Dolan

We propose a framework that alleviates this data constraint by jointly training a grounded generator and document retriever on the language model signal.

Dialogue Generation · Language Modelling · +1

A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation

2 code implementations ACL 2022 Tianyu Liu, Yizhe Zhang, Chris Brockett, Yi Mao, Zhifang Sui, Weizhu Chen, Bill Dolan

Large pretrained generative models like GPT-3 often suffer from hallucinating non-existent or incorrect content, which undermines their potential merits in real applications.

Hallucination · Sentence · +1

An Adversarially-Learned Turing Test for Dialog Generation Models

1 code implementation 16 Apr 2021 Xiang Gao, Yizhe Zhang, Michel Galley, Bill Dolan

To alleviate this risk, we propose an adversarial training approach to learn a robust model, ATT (Adversarial Turing Test), that discriminates machine-generated responses from human-written replies.

Dialogue Evaluation

What Makes Good In-Context Examples for GPT-3?

3 code implementations 17 Jan 2021 Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen

Inspired by the recent success of leveraging a retrieval module to augment large-scale neural network models, we propose to retrieve examples that are semantically-similar to a test sample to formulate its corresponding prompt.

Few-Shot Learning · Natural Language Understanding · +4
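The retrieval idea in this abstract can be sketched in a few lines: embed the candidate training examples and the test query, rank candidates by similarity, and place the top-k in the prompt. In the toy below, bag-of-words cosine similarity stands in for the real sentence encoder; the QA pairs and all function names are illustrative, not from the paper.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in embedding: bag-of-words counts. The paper retrieves with a
    # trained sentence encoder; this toy only illustrates the selection step.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_examples(train_pairs, query, k=2):
    # Rank candidate (input, output) pairs by semantic similarity to the
    # test query and keep the top-k as in-context demonstrations.
    q = embed(query)
    ranked = sorted(train_pairs, key=lambda p: cosine(embed(p[0]), q), reverse=True)
    return ranked[:k]

def build_prompt(examples, query):
    demos = "\n".join(f"Q: {x}\nA: {y}" for x, y in examples)
    return f"{demos}\nQ: {query}\nA:"

train = [
    ("what is the capital of france", "Paris"),
    ("who wrote hamlet", "Shakespeare"),
    ("what is the capital of japan", "Tokyo"),
]
query = "what is the capital of italy"
prompt = build_prompt(select_examples(train, query), query)
print(prompt)
```

Here the two capital-city examples outrank the unrelated one, so the prompt sent to the model contains the demonstrations most similar to the query.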

Narrative Incoherence Detection

no code implementations 21 Dec 2020 Deng Cai, Yizhe Zhang, Yichen Huang, Wai Lam, Bill Dolan

We propose the task of narrative incoherence detection as a new arena for inter-sentential semantic understanding: Given a multi-sentence narrative, decide whether there exist any semantic discrepancies in the narrative flow.

Sentence · Sentence Embedding

Text Editing by Command

no code implementations NAACL 2021 Felix Faltings, Michel Galley, Gerold Hintz, Chris Brockett, Chris Quirk, Jianfeng Gao, Bill Dolan

A prevailing paradigm in neural text generation is one-shot generation, where text is produced in a single step.

Sentence · Text Generation

Substance over Style: Document-Level Targeted Content Transfer

1 code implementation EMNLP 2020 Allison Hegel, Sudha Rao, Asli Celikyilmaz, Bill Dolan

Existing language models excel at writing from scratch, but many real-world scenarios require rewriting an existing document to fit a set of constraints.

Language Modelling · Sentence · +1

Contextualized Perturbation for Textual Adversarial Attack

1 code implementation NAACL 2021 Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, Bill Dolan

Adversarial examples expose the vulnerabilities of natural language processing (NLP) models, and can be used to evaluate and improve their robustness.

Adversarial Attack · Language Modelling

Reparameterized Variational Divergence Minimization for Stable Imitation

no code implementations 18 Jun 2020 Dilip Arumugam, Debadeepta Dey, Alekh Agarwal, Asli Celikyilmaz, Elnaz Nouri, Bill Dolan

While recent state-of-the-art results for adversarial imitation-learning algorithms are encouraging, works exploring the imitation learning from observation (ILO) setting, where trajectories only contain expert observations, have not met with the same success.

Continuous Control · Imitation Learning

A Recipe for Creating Multimodal Aligned Datasets for Sequential Tasks

1 code implementation ACL 2020 Angela S. Lin, Sudha Rao, Asli Celikyilmaz, Elnaz Nouri, Chris Brockett, Debadeepta Dey, Bill Dolan

Learning to align these different instruction sets is challenging because: a) different recipes vary in their order of instructions and use of ingredients; and b) video instructions can be noisy and tend to contain far more information than text instructions.

Descriptive

MixingBoard: a Knowledgeable Stylized Integrated Text Generation Platform

1 code implementation ACL 2020 Xiang Gao, Michel Galley, Bill Dolan

We present MixingBoard, a platform for quickly building demos with a focus on knowledge grounded stylized text generation.

Text Generation

POINTER: Constrained Progressive Text Generation via Insertion-based Generative Pre-training

1 code implementation EMNLP 2020 Yizhe Zhang, Guoyin Wang, Chunyuan Li, Zhe Gan, Chris Brockett, Bill Dolan

Large-scale pre-trained language models, such as BERT and GPT-2, have achieved excellent performance in language representation learning and free-form text generation.

Language Modelling · Representation Learning · +1

A Controllable Model of Grounded Response Generation

1 code implementation 1 May 2020 Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Xiang Gao, Chris Quirk, Rik Koncel-Kedziorski, Jianfeng Gao, Hannaneh Hajishirzi, Mari Ostendorf, Bill Dolan

Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process, often resulting in uninteresting responses.

Informativeness · Response Generation

Structuring Latent Spaces for Stylized Response Generation

1 code implementation IJCNLP 2019 Xiang Gao, Yizhe Zhang, Sungjin Lee, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan

This structure allows the system to generate stylized relevant responses by sampling in the neighborhood of the conversation model prediction, and continuously control the style level.

Response Generation · Style Transfer

Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading

1 code implementation ACL 2019 Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, Bill Dolan, Yejin Choi, Jianfeng Gao

Although neural conversation models are effective in learning how to produce fluent responses, their primary challenge lies in knowing what to say to make the conversation contentful and non-vacuous.

Informativeness · Reading Comprehension · +1

Consistent Dialogue Generation with Self-supervised Feature Learning

1 code implementation 13 Mar 2019 Yizhe Zhang, Xiang Gao, Sungjin Lee, Chris Brockett, Michel Galley, Jianfeng Gao, Bill Dolan

Generating responses that are consistent with the dialogue context is one of the central challenges in building engaging conversational agents.

Dialogue Generation · Response Generation

Jointly Optimizing Diversity and Relevance in Neural Response Generation

no code implementations NAACL 2019 Xiang Gao, Sungjin Lee, Yizhe Zhang, Chris Brockett, Michel Galley, Jianfeng Gao, Bill Dolan

In this paper, we propose a SpaceFusion model to jointly optimize diversity and relevance that essentially fuses the latent space of a sequence-to-sequence model and that of an autoencoder model by leveraging novel regularization terms.

Dialogue Generation · Response Generation

Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention

1 code implementation CVPR 2019 Khanh Nguyen, Debadeepta Dey, Chris Brockett, Bill Dolan

We present Vision-based Navigation with Language-based Assistance (VNLA), a grounded vision-language task where an agent with visual perception is guided via language to find objects in photorealistic indoor environments.

Imitation Learning · Navigate · +2

Multi-Task Learning for Speaker-Role Adaptation in Neural Conversation Models

no code implementations IJCNLP 2017 Yi Luan, Chris Brockett, Bill Dolan, Jianfeng Gao, Michel Galley

Building a persona-based conversation agent is challenging owing to the lack of large amounts of speaker-specific conversation data for model training.

Multi-Task Learning

A Knowledge-Grounded Neural Conversation Model

2 code implementations 7 Feb 2017 Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, Michel Galley

We generalize the widely-used Seq2Seq approach by conditioning responses on both conversation history and external "facts", allowing the model to be versatile and applicable in an open-domain setting.

Slot Filling

A Persona-Based Neural Conversation Model

1 code implementation ACL 2016 Jiwei Li, Michel Galley, Chris Brockett, Georgios P. Spithourakis, Jianfeng Gao, Bill Dolan

We present persona-based models for handling the issue of speaker consistency in neural response generation.

Response Generation

A Diversity-Promoting Objective Function for Neural Conversation Models

15 code implementations NAACL 2016 Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan

Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., "I don't know") regardless of the input.

Conversational Response Generation · Response Generation

deltaBLEU: A Discriminative Metric for Generation Tasks with Intrinsically Diverse Targets

no code implementations IJCNLP 2015 Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, Bill Dolan

We introduce Discriminative BLEU (deltaBLEU), a novel metric for intrinsic evaluation of generated text in tasks that admit a diverse range of possible outputs.

Sentence
