Dialogue Generation
230 papers with code • 14 benchmarks • 31 datasets
Dialogue generation is the task within natural language processing of "understanding" a natural language input in order to produce a natural language response. Such systems are usually intended for conversing with humans, for instance in back-and-forth dialogue with a conversational agent like a chatbot. Example benchmarks for this task (see others under Natural Language Understanding) include FusedChat and the Ubuntu Dialogue Corpus (UDC). Models can be evaluated with metrics such as BLEU, ROUGE, and METEOR, although these correlate only weakly with human judgement; newer metrics such as UnSupervised and Reference-free (USR) and Metric for automatic Unreferenced dialog evaluation (MaUde) aim to address this weakness.
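To make the evaluation step concrete, here is a minimal sketch of sentence-level BLEU in pure Python: clipped n-gram precision combined with a brevity penalty. It is illustrative only, with simple add-one smoothing chosen for the example; library implementations (e.g. NLTK, sacreBLEU) differ in smoothing and tokenization details.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU with uniform n-gram weights and a brevity penalty.

    Add-one smoothing keeps short hypotheses from collapsing to zero;
    this is a simplification relative to standard library implementations.
    """
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hypothesis, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clipped matches: a hypothesis n-gram counts at most as often
        # as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        precisions.append((overlap + 1) / (total + 1))  # add-one smoothing
    # Brevity penalty discourages trivially short responses.
    if len(hypothesis) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(hypothesis), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

reference = "i am doing well thank you for asking".split()
hypothesis = "i am doing well thanks for asking".split()
print(round(sentence_bleu(reference, hypothesis), 3))
```

The weak correlation with human judgement noted above is visible even here: a response that is fluent and appropriate but lexically different from the single reference ("thanks" vs. "thank you") is penalized on every n-gram order, which is precisely the gap metrics like USR and MaUde try to close.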
Libraries
Use these libraries to find Dialogue Generation models and implementations.
Latest papers with no code
Crafting a Good Prompt or Providing Exemplary Dialogues? A Study of In-Context Learning for Persona-based Dialogue Generation
Previous in-context learning (ICL) research has focused on tasks such as classification, machine translation, text2table, etc., while studies on whether ICL can improve human-like dialogue generation are scarce.
Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue
Tuning pretrained language models for dialogue generation has been a prevalent paradigm for building capable dialogue agents.
Investigating Content Planning for Navigating Trade-offs in Knowledge-Grounded Dialogue
Knowledge-grounded dialogue generation is a challenging task because it requires satisfying two fundamental yet often competing constraints: being responsive in a manner that is specific to what the conversation partner has said while also being attributable to an underlying source document.
Medical Dialogue Generation via Intuitive-then-Analytical Differential Diagnosis
Clinicians typically employ both intuitive and analytic reasoning to formulate a differential diagnosis.
Integrating Physician Diagnostic Logic into Large Language Models: Preference Learning from Process Feedback
The use of large language models in medical dialogue generation has garnered significant attention, with a focus on improving response quality and fluency.
OmniDialog: An Omnipotent Pre-training Model for Task-Oriented Dialogue System
To glean a nuanced understanding of OmniDialog's strengths and potential pitfalls, we designed a fine-grained analysis framework for dialogue-centric tasks.
A Survey of Text Watermarking in the Era of Large Language Models
Text watermarking algorithms play a crucial role in the copyright protection of textual content, yet their capabilities and application scenarios have been limited historically.
Enhancing Empathetic and Emotion Support Dialogue Generation with Prophetic Commonsense Inference
The interest in Empathetic and Emotional Support conversations among the public has significantly increased.
E-CORE: Emotion Correlation Enhanced Empathetic Dialogue Generation
We propose an emotion correlation enhanced decoder, with a novel correlation-aware aggregation and a soft/hard strategy, which respectively improve emotion perception and response generation.
CMed-GPT: Prompt Tuning for Entity-Aware Chinese Medical Dialogue Generation
Medical dialogue generation relies on natural language generation techniques to enable online medical consultations.