10 papers with code • 0 benchmarks • 0 datasets
Given an input conversation, generate a natural-looking text reply to the last conversation element.
We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained transformer).
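As a rough illustration of how a model like DialoGPT consumes a multi-turn conversation, the sketch below flattens the dialogue history into one string whose turns are separated by GPT-2's end-of-text token (which DialoGPT inherits); the model then continues the string to produce the reply to the last turn. The helper name and example turns are illustrative, not from the paper.

```python
# Minimal sketch: flatten a conversation into a single DialoGPT-style prompt.
EOS = "<|endoftext|>"  # GPT-2's EOS token, reused as a turn separator


def build_prompt(turns):
    """Join conversation turns with the EOS separator, leaving a trailing
    EOS so the model generates the reply to the last turn."""
    return EOS.join(turns) + EOS


prompt = build_prompt(["Does money buy happiness?",
                       "Depends how much money you spend on it."])
print(prompt)
```

In practice the string would be tokenized and fed to the model's generation loop; the point here is only the turn-concatenation format.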
Pre-training and fine-tuning, e.g., BERT, have achieved great success in language understanding by transferring knowledge from rich-resource pre-training tasks to low/zero-resource downstream tasks.
Sequence-to-sequence neural network models for generating conversational responses tend to produce safe, commonplace responses (e.g., "I don't know") regardless of the input.
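The standard remedy this line of work proposes is a maximum-mutual-information (MMI) reranking objective: score each candidate by log p(T|S) − λ·log p(T), so replies that are probable under the unconditional language model (i.e., generic anywhere) are penalized. The sketch below illustrates the scoring with made-up probabilities and a hypothetical λ = 0.5.

```python
import math

# Toy MMI-style reranking: penalize responses that are likely under the
# unconditional language model p(T), demoting generic replies like
# "I don't know". All probabilities below are invented for illustration.


def mmi_score(log_p_t_given_s, log_p_t, lam=0.5):
    # log p(T|S) - lambda * log p(T)
    return log_p_t_given_s - lam * log_p_t


candidates = {
    "I don't know.":    {"cond": math.log(0.30), "lm": math.log(0.20)},
    "It opens at 9am.": {"cond": math.log(0.25), "lm": math.log(0.001)},
}

best = max(candidates,
           key=lambda t: mmi_score(candidates[t]["cond"], candidates[t]["lm"]))
print(best)  # the specific (low-LM-probability) reply wins
```

Even though "I don't know." has the higher conditional likelihood here, its high unconditional probability costs it the top rank.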
An extensive set of experiments shows that PALM achieves new state-of-the-art results on a variety of language generation benchmarks covering generative question answering (Rank 1 on the official MARCO leaderboard), abstractive summarization on CNN/DailyMail as well as Gigaword, question generation on SQuAD, and conversational response generation on Cornell Movie Dialogues.
Current dialogue summarization systems usually encode the text with a number of general semantic features (e.g., keywords and topics) to gain more powerful dialogue modeling capabilities.
In this work, we propose a memory-augmented generative model that learns to abstract from the training corpus and stores the useful information in memory to assist response generation.
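A generic version of the retrieval step in such memory-augmented generators can be sketched as follows: embed the current context, look up the closest memory slot by cosine similarity, and condition generation on its content. The slot names, vectors, and helper functions below are invented toy values, not the paper's architecture.

```python
import math

# Toy nearest-neighbor lookup over a fixed memory of embedding vectors.


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


memory = {
    "greeting pattern": [1.0, 0.0, 0.1],
    "restaurant info":  [0.0, 1.0, 0.2],
}


def retrieve(context_vec):
    # Return the memory slot most similar to the current context embedding.
    return max(memory, key=lambda k: cosine(memory[k], context_vec))


print(retrieve([0.9, 0.1, 0.0]))
```

The retrieved slot would then be fed to the decoder alongside the dialogue context; real systems learn both the memory contents and the similarity function.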
Responses generated by neural conversational models tend to lack informativeness and diversity.
In this paper, we address the problem of answering complex information needs by conversing with search engines, in the sense that users can express their queries in natural language and directly receive the information they need from a short system response in a conversational manner.
We use the evaluation framework to benchmark the widely used conversational DialoGPT model along with the adaptations of four debiasing methods.