Response Generation

270 papers with code • 3 benchmarks • 7 datasets

A task in which an agent plays the $DE$ role and generates a text response to a $P$ message.

Libraries

Use these libraries to find Response Generation models and implementations

Most implemented papers

A Diversity-Promoting Objective Function for Neural Conversation Models

pender/chatbot-rnn NAACL 2016

Sequence-to-sequence neural network models for generating conversational responses tend to produce safe, commonplace replies (e.g., "I don't know") regardless of the input.
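The paper's Maximum Mutual Information (MMI) objective counters this by penalizing responses that are likely regardless of the input, reranking candidates by log p(T|S) − λ·log p(T). A minimal sketch, assuming toy log-probabilities (in practice both scores come from trained neural models):

```python
def mmi_rerank(candidates, lam=0.5):
    """Rerank candidate responses T by log p(T|S) - lam * log p(T).

    `candidates` maps each response to a pair
    (log_p_t_given_s, log_p_t); the values below are illustrative.
    """
    return sorted(
        candidates,
        key=lambda t: candidates[t][0] - lam * candidates[t][1],
        reverse=True,
    )

# The generic reply has high unconditional probability log p(T),
# so the MMI penalty demotes it in favor of the specific one.
scores = {
    "I don't know.":         (-2.0, -1.0),  # likely reply, but generic
    "Try rebooting the VM.": (-3.0, -6.0),  # less likely, more specific
}
ranked = mmi_rerank(scores, lam=0.5)
print(ranked[0])  # → "Try rebooting the VM."
```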

Language Models are Unsupervised Multitask Learners

openai/gpt-2 Preprint 2019

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.

A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues

julianser/hed-dlg-truncated 19 May 2016

Sequential data often possesses a hierarchical structure with complex dependencies between subsequences, such as found between the utterances in a dialogue.

Unified Language Model Pre-training for Natural Language Understanding and Generation

microsoft/unilm NeurIPS 2019

This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks.

MASS: Masked Sequence to Sequence Pre-training for Language Generation

microsoft/MASS 7 May 2019

Pre-training and fine-tuning, e.g., BERT, have achieved great success in language understanding by transferring knowledge from a rich-resource pre-training task to low/zero-resource downstream tasks.
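MASS's pre-training objective masks a contiguous fragment of the encoder input and trains the decoder to reconstruct exactly that fragment. A rough sketch of the data preparation, with a hypothetical `[MASK]` token standing in for the model's actual mask symbol:

```python
def mass_mask(tokens, start, length, mask_token="[MASK]"):
    """Mask a contiguous span: the encoder sees the sentence with the
    span replaced by mask tokens; the decoder's target is the span itself."""
    encoder_input = (
        tokens[:start] + [mask_token] * length + tokens[start + length:]
    )
    decoder_target = tokens[start:start + length]
    return encoder_input, decoder_target

enc, dec = mass_mask(["pre-training", "helps", "low-resource", "tasks"], 1, 2)
print(enc)  # → ['pre-training', '[MASK]', '[MASK]', 'tasks']
print(dec)  # → ['helps', 'low-resource']
```

Predicting a span rather than isolated tokens forces the decoder to model dependencies within the masked fragment, which suits generation tasks.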

NLG Evaluation Metrics Beyond Correlation Analysis: An Empirical Metric Preference Checklist

inimah/metric-preference-checklist 15 May 2023

Our proposed framework provides a means (i) to verify whether automatic metrics are faithful to human preference, regardless of their level of correlation with human judgments; and (ii) to inspect the strengths and limitations of NLG systems via pairwise evaluation.
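The pairwise check in (i) can be stated simply: over pairs of systems, how often does the metric's preferred system match the human-preferred one? A toy sketch, with illustrative system names and scores:

```python
from itertools import combinations

def pairwise_agreement(metric_scores, human_scores):
    """Fraction of system pairs on which the automatic metric and the
    human scores strictly prefer the same system."""
    pairs = list(combinations(metric_scores, 2))
    agree = sum(
        (metric_scores[a] > metric_scores[b]) == (human_scores[a] > human_scores[b])
        for a, b in pairs
    )
    return agree / len(pairs)

metric = {"sysA": 0.71, "sysB": 0.64, "sysC": 0.55}  # e.g. an automatic metric
human  = {"sysA": 4.1,  "sysB": 4.3,  "sysC": 3.2}   # e.g. human ratings
print(pairwise_agreement(metric, human))  # agrees on 2 of 3 pairs
```

A metric can correlate well with human scores overall yet still flip the preferred system on the pairs that matter, which is what this style of check surfaces.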

DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation

microsoft/DialoGPT 1 Nov 2019

We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained transformer).
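DialoGPT is trained on dialogue threads flattened into a single token stream, with turns joined by GPT-2's end-of-text token; the model then generates the next turn after the final separator. A minimal sketch of that input construction:

```python
EOS = "<|endoftext|>"  # GPT-2's end-of-text token, reused as a turn separator

def build_dialogue_context(turns):
    """Concatenate dialogue turns into one string; generation
    continues after the trailing separator."""
    return EOS.join(turns) + EOS

ctx = build_dialogue_context([
    "Does money buy happiness?",
    "Depends how much money you spend on it.",
])
print(ctx)
# → "Does money buy happiness?<|endoftext|>Depends how much money you spend on it.<|endoftext|>"
```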

MultiWOZ -- A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling

jojonki/MultiWOZ-Parser EMNLP 2018

Even though machine learning has become central to the dialogue research community, a real breakthrough has been blocked by the scale of available data.

Multiresolution Recurrent Neural Networks: An Application to Dialogue Response Generation

julianser/Ubuntu-Multiresolution-Tools 2 Jun 2016

We introduce the multiresolution recurrent neural network, which extends the sequence-to-sequence framework to model natural language generation as two parallel discrete stochastic processes: a sequence of high-level coarse tokens, and a sequence of natural language tokens.
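The two parallel processes can be illustrated with a toy coarse-token extractor; the paper's coarse sequences are, e.g., noun-based or activity-entity representations produced by an NLP pipeline, whereas the stopword filter below is purely a hypothetical stand-in:

```python
# Hypothetical stand-in for the coarse representation.
STOPWORDS = {"i", "the", "a", "to", "my", "is", "it", "cannot"}

def coarse_sequence(utterance):
    """High-level coarse tokens: here, simply the non-stopword tokens."""
    return [w for w in utterance.lower().split() if w not in STOPWORDS]

natural = "I cannot mount my usb drive".split()
coarse = coarse_sequence("I cannot mount my usb drive")
print(coarse)  # → ['mount', 'usb', 'drive']
# The model decodes the coarse sequence first, then conditions the
# natural-language decoder on it.
```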

Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset

google-research-datasets/dstc8-schema-guided-dialogue 12 Sep 2019

In this work, we introduce the Schema-Guided Dialogue (SGD) dataset, containing over 16k multi-domain conversations spanning 16 domains.