Dialogue Generation

147 papers with code • 12 benchmarks • 23 datasets

Dialogue generation is the natural language processing task of producing a conversational response to natural language input. Such systems are usually intended to converse with humans, for instance in back-and-forth exchanges with a conversational agent such as a chatbot. Example benchmarks for this task (see also related tasks such as Natural Language Understanding) include FusedChat and the Ubuntu Dialogue Corpus (UDC). Models can be evaluated with metrics such as BLEU, ROUGE, and METEOR, although these correlate weakly with human judgement, a shortcoming that newer metrics such as UnSupervised and Reference-free (USR) and Metric for automatic Unreferenced dialog evaluation (MaUde) aim to address.
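To make the n-gram overlap idea behind metrics like BLEU concrete, here is a minimal, self-contained sketch of unigram BLEU (clipped precision with a brevity penalty). This is a toy illustration only, not the full multi-order BLEU used in practice (for real evaluation, use a library such as NLTK or sacrebleu); the function name and example sentences are illustrative choices, not part of any standard API.

```python
import math
from collections import Counter

def bleu1(reference, hypothesis):
    """Unigram BLEU: clipped unigram precision times a brevity penalty.

    A toy illustration of n-gram overlap scoring; real BLEU combines
    precisions over 1- to 4-grams.
    """
    ref_counts = Counter(reference)
    hyp_counts = Counter(hypothesis)
    # Clip each hypothesis token's count by its count in the reference,
    # so repeating a matching word cannot inflate the score.
    overlap = sum(min(count, ref_counts[tok]) for tok, count in hyp_counts.items())
    precision = overlap / max(len(hypothesis), 1)
    # Brevity penalty: penalize responses shorter than the reference.
    if len(hypothesis) > len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(hypothesis), 1))
    return bp * precision

ref = "i am doing well thank you".split()
hyp = "i am fine thank you".split()
print(round(bleu1(ref, hyp), 3))  # -> 0.655
```

Note how the short, perfectly fluent response "i am fine thank you" scores only 0.655 against this single reference; dialogue admits many valid responses with little word overlap, which is exactly why such metrics correlate weakly with human judgement.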



Most implemented papers

Neural Machine Translation by Jointly Learning to Align and Translate

graykode/nlp-tutorial 1 Sep 2014

Neural machine translation is a recently proposed approach to machine translation.

TransferTransfo: A Transfer Learning Approach for Neural Network Based Conversational Agents

huggingface/transfer-learning-conv-ai 23 Jan 2019

We introduce a new approach to generative data-driven dialogue systems (e.g. chatbots) called TransferTransfo, which is a combination of a transfer-learning-based training scheme and a high-capacity Transformer model.

Personalizing Dialogue Agents: I have a dog, do you have pets too?

facebookresearch/ParlAI ACL 2018

Chit-chat models are known to have several problems: they lack specificity, do not display a consistent personality and are often not very captivating.

Deep Reinforcement Learning for Dialogue Generation

liuyuemaicha/Deep-Reinforcement-Learning-for-Dialogue-Generation-in-tensorflow EMNLP 2016

Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be shortsighted, predicting utterances one at a time while ignoring their influence on future outcomes.

Adversarial Learning for Neural Dialogue Generation

liuyuemaicha/Adversarial-Learning-for-Neural-Dialogue-Generation-in-Tensorflow EMNLP 2017

In this paper, drawing intuition from the Turing test, we propose using adversarial training for open-domain dialogue generation: the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances.

MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations

declare-lab/MELD ACL 2019

We propose several strong multimodal baselines and show the importance of contextual and multimodal information for emotion recognition in conversations.

Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset

facebookresearch/EmpatheticDialogues ACL 2019

One challenge for dialogue agents is recognizing feelings in the conversation partner and replying accordingly, a key communicative skill.

Multiresolution Recurrent Neural Networks: An Application to Dialogue Response Generation

julianser/Ubuntu-Multiresolution-Tools 2 Jun 2016

We introduce the multiresolution recurrent neural network, which extends the sequence-to-sequence framework to model natural language generation as two parallel discrete stochastic processes: a sequence of high-level coarse tokens, and a sequence of natural language tokens.

ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation

PaddlePaddle/ERNIE 26 Jan 2020

Current pre-training works in natural language generation pay little attention to the problem of exposure bias on downstream tasks.

Relevance of Unsupervised Metrics in Task-Oriented Dialogue for Evaluating Natural Language Generation

Maluuba/nlg-eval ICLR 2018

However, previous work in dialogue response generation has shown that these metrics do not correlate strongly with human judgment in the non task-oriented dialogue setting.