1 code implementation • NAACL (Wordplay) 2022 • Ryan Volum, Sudha Rao, Michael Xu, Gabriel DesGarennes, Chris Brockett, Benjamin Van Durme, Olivia Deng, Akanksha Malhotra, Bill Dolan
In this work, we demonstrate that the use of a few example conversational prompts can power a conversational agent to generate both natural language and novel code.
no code implementations • 25 Apr 2024 • Xiangyu Peng, Jessica Quaye, Weijia Xu, Portia Botchway, Chris Brockett, Bill Dolan, Nebojsa Jojic, Gabriel DesGarennes, Ken Lobb, Michael Xu, Jorge Leandro, Claire Jin, Sudha Rao
We explore how interaction with large language models (LLMs) can give rise to emergent behaviors, empowering players to participate in the evolution of game narratives.
no code implementations • 15 Nov 2023 • Jorge Leandro, Sudha Rao, Michael Xu, Weijia Xu, Nebojsa Jojic, Chris Brockett, Bill Dolan
GRIM, a prototype GRaph-based Interactive narrative visualization system for gaMes, generates a rich narrative graph with branching storylines that match a high-level narrative description and constraints provided by the designer.
no code implementations • 22 May 2023 • Ashish Sharma, Sudha Rao, Chris Brockett, Akanksha Malhotra, Nebojsa Jojic, Bill Dolan
While LLMs are being developed to simulate human behavior and serve as human-like agents, little attention has been given to the agency these models should possess in order to proactively manage the direction of interaction and collaboration.
1 code implementation • 22 Jun 2022 • Baolin Peng, Michel Galley, Pengcheng He, Chris Brockett, Lars Liden, Elnaz Nouri, Zhou Yu, Bill Dolan, Jianfeng Gao
We introduce GODEL (Grounded Open Dialogue Language Model), a large pre-trained language model for dialog.
no code implementations • Findings (ACL) 2021 • Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Bill Dolan
The advent of large pre-trained language models has made it possible to make high-quality predictions on how to add or change a sentence in a document.
1 code implementation • 14 May 2021 • Yizhe Zhang, Siqi Sun, Xiang Gao, Yuwei Fang, Chris Brockett, Michel Galley, Jianfeng Gao, Bill Dolan
We propose a framework that alleviates this data constraint by jointly training a grounded generator and document retriever on the language model signal.
2 code implementations • ACL 2022 • Tianyu Liu, Yizhe Zhang, Chris Brockett, Yi Mao, Zhifang Sui, Weizhu Chen, Bill Dolan
Large pretrained generative models like GPT-3 often suffer from hallucinating non-existent or incorrect content, which undermines their potential merits in real applications.
no code implementations • NAACL 2021 • Felix Faltings, Michel Galley, Gerold Hintz, Chris Brockett, Chris Quirk, Jianfeng Gao, Bill Dolan
A prevailing paradigm in neural text generation is one-shot generation, where text is produced in a single step.
1 code implementation • NAACL 2021 • Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, Bill Dolan
Adversarial examples expose the vulnerabilities of natural language processing (NLP) models, and can be used to evaluate and improve their robustness.
2 code implementations • EMNLP 2020 • Xiang Gao, Yizhe Zhang, Michel Galley, Chris Brockett, Bill Dolan
In particular, our ranker outperforms the conventional dialog perplexity baseline by a large margin in predicting Reddit feedback.
1 code implementation • ACL 2020 • Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan
We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained transformer).
1 code implementation • ACL 2020 • Angela S. Lin, Sudha Rao, Asli Celikyilmaz, Elnaz Nouri, Chris Brockett, Debadeepta Dey, Bill Dolan
Learning to align these different instruction sets is challenging because: a) different recipes vary in their order of instructions and use of ingredients; and b) video instructions can be noisy and tend to contain far more information than text instructions.
1 code implementation • 1 May 2020 • Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Xiang Gao, Chris Quirk, Rik Koncel-Kedziorski, Jianfeng Gao, Hannaneh Hajishirzi, Mari Ostendorf, Bill Dolan
Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process, often resulting in uninteresting responses.
1 code implementation • EMNLP 2020 • Yizhe Zhang, Guoyin Wang, Chunyuan Li, Zhe Gan, Chris Brockett, Bill Dolan
Large-scale pre-trained language models, such as BERT and GPT-2, have achieved excellent performance in language representation learning and free-form text generation.
6 code implementations • 1 Nov 2019 • Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan
We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained transformer).
no code implementations • WS 2019 • Woon Sang Cho, Yizhe Zhang, Sudha Rao, Chris Brockett, Sungjin Lee
A preliminary step towards this goal is to generate a question that captures common concepts of multiple documents.
1 code implementation • IJCNLP 2019 • Xiang Gao, Yizhe Zhang, Sungjin Lee, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan
This structure allows the system to generate stylized relevant responses by sampling in the neighborhood of the conversation model prediction, and continuously control the style level.
1 code implementation • IJCNLP 2019 • Dianqi Li, Yizhe Zhang, Zhe Gan, Yu Cheng, Chris Brockett, Ming-Ting Sun, Bill Dolan
These data may demonstrate domain shift, which impedes the benefits of utilizing such data for training.
no code implementations • ACL 2019 • Vighnesh Leonardo Shiv, Chris Quirk, Anshuman Suri, Xiang Gao, Khuram Shahid, Nithya Govindarajan, Yizhe Zhang, Jianfeng Gao, Michel Galley, Chris Brockett, Tulasi Menon, Bill Dolan
The Intelligent Conversation Engine: Code and Pre-trained Systems (Microsoft Icecaps) is an upcoming open-source natural language processing repository.
1 code implementation • ACL 2019 • Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, Bill Dolan, Yejin Choi, Jianfeng Gao
Although neural conversation models are effective in learning how to produce fluent responses, their primary challenge lies in knowing what to say to make the conversation contentful and non-vacuous.
1 code implementation • 13 Mar 2019 • Yizhe Zhang, Xiang Gao, Sungjin Lee, Chris Brockett, Michel Galley, Jianfeng Gao, Bill Dolan
Generating responses that are consistent with the dialogue context is one of the central challenges in building engaging conversational agents.
no code implementations • NAACL 2019 • Xiang Gao, Sungjin Lee, Yizhe Zhang, Chris Brockett, Michel Galley, Jianfeng Gao, Bill Dolan
In this paper, we propose a SpaceFusion model to jointly optimize diversity and relevance that essentially fuses the latent space of a sequence-to-sequence model and that of an autoencoder model by leveraging novel regularization terms.
Ranked #1 on Dialogue Generation on Reddit (multi-ref)
no code implementations • 11 Jan 2019 • Koichiro Yoshino, Chiori Hori, Julien Perez, Luis Fernando D'Haro, Lazaros Polymenakos, Chulaka Gunasekara, Walter S. Lasecki, Jonathan K. Kummerfeld, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, Xiang Gao, Huda Alamri, Tim K. Marks, Devi Parikh, Dhruv Batra
This paper introduces the Seventh Dialog System Technology Challenge (DSTC7), which uses shared datasets to explore the problem of building dialog systems.
1 code implementation • CVPR 2019 • Khanh Nguyen, Debadeepta Dey, Chris Brockett, Bill Dolan
We present Vision-based Navigation with Language-based Assistance (VNLA), a grounded vision-language task where an agent with visual perception is guided via language to find objects in photorealistic indoor environments.
no code implementations • WS 2019 • Woon Sang Cho, Pengchuan Zhang, Yizhe Zhang, Xiujun Li, Michel Galley, Chris Brockett, Mengdi Wang, Jianfeng Gao
Generating coherent and cohesive long-form texts is a challenging task.
4 code implementations • NeurIPS 2018 • Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, Bill Dolan
Responses generated by neural conversational models tend to lack informativeness and diversity.
no code implementations • IJCNLP 2017 • Yi Luan, Chris Brockett, Bill Dolan, Jianfeng Gao, Michel Galley
Building a persona-based conversation agent is challenging owing to the lack of large amounts of speaker-specific conversation data for model training.
1 code implementation • EMNLP 2017 • Di Wang, Nebojsa Jojic, Chris Brockett, Eric Nyberg
We propose simple and flexible training and decoding methods for influencing output style and topic in neural encoder-decoder based language generation.
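The decoding-time side of this idea can be illustrated with a minimal sketch: nudge the model toward a target style or topic by adding a bonus to the logits of words in a chosen vocabulary before picking the next token. The function name, the toy logits, and the bias scheme below are invented for illustration and are not the paper's exact method.

```python
def biased_greedy_step(logits, style_words, beta=2.0):
    """One greedy decoding step with a hypothetical additive logit
    bonus for tokens in a target style/topic vocabulary -- a minimal
    sketch of decoding-time steering, not the paper's exact method."""
    biased = {w: l + (beta if w in style_words else 0.0)
              for w, l in logits.items()}
    # Pick the highest-scoring token after biasing.
    return max(biased, key=biased.get)

# Toy next-token logits from some language model (invented numbers).
logits = {"fine": 1.2, "splendid": 0.1, "okay": 1.0}

print(biased_greedy_step(logits, style_words=set()))       # "fine"
print(biased_greedy_step(logits, {"splendid"}, beta=2.0))  # "splendid"
```

With no bias the most likely word wins; with a modest bonus, an on-style word overtakes it while still competing against the model's own scores.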
2 code implementations • 7 Feb 2017 • Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, Michel Galley
We generalize the widely-used Seq2Seq approach by conditioning responses on both conversation history and external "facts", allowing the model to be versatile and applicable in an open-domain setting.
no code implementations • IJCNLP 2017 • Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel Galley, Jianfeng Gao, Georgios P. Spithourakis, Lucy Vanderwende
The popularity of image sharing on social media and the engagement it creates between users reflects the important role that visual context plays in everyday conversations.
no code implementations • 22 Jun 2016 • Abhay Prakash, Chris Brockett, Puneet Agrawal
Conversational agents ("bots") are beginning to be widely used in conversational interfaces.
1 code implementation • ACL 2016 • Jiwei Li, Michel Galley, Chris Brockett, Georgios P. Spithourakis, Jianfeng Gao, Bill Dolan
We present persona-based models for handling the issue of speaker consistency in neural response generation.
15 code implementations • NAACL 2016 • Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan
Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., "I don't know") regardless of the input.
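The maximum mutual information (MMI) objective this paper proposes against such bland responses can be sketched as a reranking rule: score each candidate by log p(T|S) minus a penalty on its unconditional likelihood log p(T), which demotes generic replies. The candidate strings and log-probabilities below are toy values, not from the paper.

```python
def mmi_score(log_p_t_given_s, log_p_t, lam=0.5):
    """MMI-antiLM style score: log p(T|S) - lambda * log p(T).
    Subtracting the unconditional likelihood log p(T) penalizes
    responses that are probable regardless of the input."""
    return log_p_t_given_s - lam * log_p_t

# Hypothetical candidates: (text, log p(T|S), log p(T)).
# Generic replies have a high unconditional likelihood log p(T).
candidates = [
    ("I don't know.",                 -2.0, -1.0),  # fluent but generic
    ("It opens at nine on weekdays.", -2.5, -6.0),  # specific
]

ranked = sorted(candidates,
                key=lambda c: mmi_score(c[1], c[2], lam=0.5),
                reverse=True)
print(ranked[0][0])  # the specific response wins after reranking
```

Here the generic reply scores -2.0 - 0.5·(-1.0) = -1.5 while the specific one scores -2.5 - 0.5·(-6.0) = 0.5, so the anti-LM penalty flips the ranking.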
no code implementations • IJCNLP 2015 • Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, Bill Dolan
We introduce Discriminative BLEU (deltaBLEU), a novel metric for intrinsic evaluation of generated text in tasks that admit a diverse range of possible outputs.
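The core idea of weighting reference matches by human quality ratings can be shown in a much-simplified, unigram-only sketch. The real metric extends BLEU's clipped n-gram counts; the function, sentences, and weights below are invented to convey only the weighting intuition.

```python
def weighted_unigram_precision(hyp, refs):
    """Toy, unigram-only illustration of the deltaBLEU idea:
    each matched token is credited with the weight (a human quality
    rating in [-1, 1]) of the best-rated reference containing it,
    so matches against bad references can subtract credit."""
    hyp_tokens = hyp.split()
    total = 0.0
    for tok in hyp_tokens:
        weights = [w for ref, w in refs if tok in ref.split()]
        total += max(weights) if weights else 0.0
    return total / len(hyp_tokens)

refs = [("the movie was great", 1.0),   # highly rated reference
        ("yeah whatever", -0.5)]        # poorly rated reference

print(weighted_unigram_precision("the movie was great", refs))  # 1.0
print(weighted_unigram_precision("yeah whatever", refs))        # -0.5
```

Matching a well-rated reference earns full credit, while matching only a poorly rated one is penalized, which is what makes the metric discriminative across a diverse reference set.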
no code implementations • HLT 2015 • Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan
We present a novel response generation system that can be trained end to end on large quantities of unstructured Twitter conversations.