Conversational Response Generation
13 papers with code • 0 benchmarks • 2 datasets
Given an input conversation, generate a natural-sounding text reply to the last turn of the conversation.
These leaderboards are used to track progress in Conversational Response Generation.
Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., "I don't know") regardless of the input.
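One common remedy for this "safe response" problem is maximum mutual information (MMI) reranking: score each candidate by its likelihood given the context minus a penalty for its context-independent likelihood, so generic replies that are probable under any context get demoted. The sketch below is illustrative only; the log-probabilities are hand-picked stand-ins for model scores, and `lam` is a hypothetical weighting parameter, not a value from any of the papers listed here.

```python
# Toy MMI-style reranking sketch:
#   score(r) = log p(r | context) - lam * log p(r)
# Generic responses have a high marginal log p(r), so subtracting it
# penalizes them relative to specific, informative responses.

def mmi_rerank(candidates, lam=0.5):
    """candidates: list of (response, logp_given_context, logp_marginal).
    Returns candidates sorted best-first by the MMI score."""
    scored = [(resp, lp_ctx - lam * lp_marg)
              for resp, lp_ctx, lp_marg in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

candidates = [
    ("i don't know", -1.0, -0.5),             # generic: likely under any context
    ("the match starts at 8pm", -1.5, -6.0),  # specific: unlikely a priori
]
ranked = mmi_rerank(candidates, lam=0.5)
print(ranked[0][0])  # the specific response outranks the generic one
```

With the anti-language-model penalty, the generic reply scores -1.0 + 0.25 = -0.75 while the specific one scores -1.5 + 3.0 = 1.5, flipping the ranking that raw conditional likelihood would give.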
Pre-training and fine-tuning, e.g., BERT, have achieved great success in language understanding by transferring knowledge from rich-resource pre-training tasks to low/zero-resource downstream tasks.
We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained transformer).
Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization
Responses generated by neural conversational models tend to lack informativeness and diversity.
An extensive set of experiments show that PALM achieves new state-of-the-art results on a variety of language generation benchmarks covering generative question answering (Rank 1 on the official MARCO leaderboard), abstractive summarization on CNN/DailyMail as well as Gigaword, question generation on SQuAD, and conversational response generation on Cornell Movie Dialogues.
In this work, we propose a memory-augmented generative model, which learns to abstract from the training corpus and saves the useful information to the memory to assist the response generation.
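The read/write pattern of such a memory can be sketched in a few lines: store (context, response) pairs from the training corpus, then at inference retrieve the response whose stored context is closest to the current one and hand it to the generator as extra evidence. Everything below is a hypothetical toy, assuming token-overlap (Jaccard) similarity in place of the paper's learned abstraction and retrieval.

```python
def jaccard(a, b):
    """Token-overlap similarity between two whitespace-tokenized strings."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

class MemoryAugmentedResponder:
    """Toy sketch: the 'memory' holds (context, response) pairs saved from
    a training corpus; read() retrieves the response attached to the most
    similar stored context, which a generator could then condition on."""

    def __init__(self):
        self.memory = []

    def write(self, context, response):
        self.memory.append((context, response))

    def read(self, context):
        best = max(self.memory, key=lambda item: jaccard(item[0], context))
        return best[1]

mem = MemoryAugmentedResponder()
mem.write("what time is the movie", "it starts at seven")
mem.write("how is the weather today", "sunny with light wind")
print(mem.read("what time does the movie start"))  # -> "it starts at seven"
```

A real system would replace the Jaccard lookup with learned dense retrieval and fuse the retrieved response into the decoder rather than returning it verbatim; the sketch only shows the memory's role in assisting generation.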
In this paper, we address the problem of answering complex information needs by conducting conversations with search engines, in the sense that users can express their queries in natural language and directly receive the information they need from a short system response in a conversational manner.
However, generating personalized responses remains a challenging task, since leveraging predefined persona information is often insufficient.
Recent advances in pre-trained language models have significantly improved neural response generation.