Response Generation
280 papers with code • 3 benchmarks • 7 datasets
A task in which a dialogue agent must generate a text response to a preceding message in the conversation.
Libraries
Use these libraries to find Response Generation models and implementations.
Most implemented papers
Relevance of Unsupervised Metrics in Task-Oriented Dialogue for Evaluating Natural Language Generation
However, previous work in dialogue response generation has shown that these metrics do not correlate strongly with human judgment in the non task-oriented dialogue setting.
DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder
Variational autoencoders (VAEs) have shown promise in data-driven conversation modeling.
Response Generation by Context-aware Prototype Editing
Open domain response generation has achieved remarkable progress in recent years, but sometimes yields short and uninformative responses.
CoSQL: A Conversational Text-to-SQL Challenge Towards Cross-Domain Natural Language Interfaces to Databases
We present CoSQL, a corpus for building cross-domain, general-purpose database (DB) querying dialogue systems.
PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable
Pre-trained models have proven effective for a wide range of natural language processing tasks.
PLATO-2: Towards Building an Open-Domain Chatbot via Curriculum Learning
To build a high-quality open-domain chatbot, we introduce the effective training process of PLATO-2 via curriculum learning.
How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation
We investigate evaluation metrics for dialogue response generation systems where supervised labels, such as task completion, are not available.
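The word-overlap metrics examined in this line of work (e.g., BLEU) compare a generated response against a reference by counting shared n-grams. A minimal sketch of clipped unigram precision, the building block of such metrics, might look like the following (this is an illustrative simplification, not the paper's exact evaluation code):

```python
from collections import Counter

def unigram_precision(candidate: str, reference: str) -> float:
    """Clipped unigram precision: the fraction of candidate tokens
    that also appear in the reference, with counts clipped so a
    repeated candidate token cannot be credited more times than it
    occurs in the reference."""
    cand_tokens = candidate.split()
    if not cand_tokens:
        return 0.0
    ref_counts = Counter(reference.split())
    clipped = sum(min(count, ref_counts[word])
                  for word, count in Counter(cand_tokens).items())
    return clipped / len(cand_tokens)

# Example: a generic response can still score well against one reference,
# which is part of why such metrics correlate poorly with human judgment.
score = unigram_precision("i do not know", "i do not know what you mean")
```

Because many distinct responses are acceptable in open-ended dialogue, a high or low overlap score with a single reference says little about response quality, which is the paper's central finding.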
Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation
The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains.
Improving Neural Response Diversity with Frequency-Aware Cross-Entropy Loss
Specifically, we first analyze the influence of the commonly used Cross-Entropy (CE) loss function, and find that the CE loss function prefers high-frequency tokens, which results in low-diversity responses.
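The core idea of a frequency-aware loss can be sketched as reweighting each target token's cross-entropy term inversely to how often that token appears in the training corpus, so that frequent generic tokens contribute less. The weighting scheme below (inverse relative frequency, normalized to mean 1) is a hypothetical simplification for illustration, not the paper's exact formulation:

```python
import math
from collections import Counter

def frequency_weights(corpus_tokens):
    """Per-token loss weights inversely proportional to corpus
    frequency, normalized so the average weight is 1.0."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    raw = {tok: total / c for tok, c in counts.items()}
    mean = sum(raw.values()) / len(raw)
    return {tok: w / mean for tok, w in raw.items()}

def weighted_nll(token_log_probs, targets, weights):
    """Average negative log-likelihood with per-token frequency weights.
    token_log_probs[i] is the model's log-probability of targets[i]."""
    return -sum(weights.get(tok, 1.0) * lp
                for lp, tok in zip(token_log_probs, targets)) / len(targets)

# Rare tokens get a larger weight than frequent ones,
# nudging the model away from low-diversity, high-frequency responses.
w = frequency_weights(["the", "the", "the", "cat"])
```

With plain cross-entropy every token counts equally, so the loss is dominated by frequent tokens; the reweighting above is one way to counteract that bias.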
Cyclical Annealing Schedule: A Simple Approach to Mitigating KL Vanishing
Variational autoencoders (VAEs) with an auto-regressive decoder have been applied for many natural language processing (NLP) tasks.
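KL vanishing occurs when the KL term of the VAE objective collapses to zero early in training, leaving the latent code unused. The cyclical schedule repeatedly ramps the KL weight beta from 0 to 1 over several cycles rather than annealing it once. A minimal sketch of such a schedule (parameter names here are illustrative, not taken from any particular implementation):

```python
def cyclical_beta(step: int, total_steps: int,
                  n_cycles: int = 4, ratio: float = 0.5) -> float:
    """KL weight for the given training step under a cyclical schedule:
    within each cycle, beta rises linearly from 0 to 1 during the first
    `ratio` fraction of the cycle, then stays at 1 for the remainder."""
    period = total_steps / n_cycles
    phase = (step % period) / period  # position within the current cycle, in [0, 1)
    return min(1.0, phase / ratio)

# The weighted objective is then: reconstruction_loss + beta * kl_divergence,
# with beta reset to 0 at the start of each cycle so the decoder
# periodically relearns to exploit the latent variable.
```

Each reset gives the model a fresh window in which encoding information into the latent variable is cheap, which is how the schedule mitigates posterior collapse.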