Dialogue Generation
229 papers with code • 14 benchmarks • 31 datasets
Dialogue generation is a natural language processing task in which a system must "understand" natural language input and produce a natural language response. Such systems are usually intended to converse with humans, for instance a chatbot engaging in back-and-forth dialogue with a user. Example benchmarks for this task (see others under Natural Language Understanding) include FusedChat and the Ubuntu Dialogue Corpus (UDC). Models can be evaluated with metrics such as BLEU, ROUGE, and METEOR, although these correlate only weakly with human judgement; newer metrics such as UnSupervised and Reference-free (USR) and Metric for automatic Unreferenced dialog evaluation (MaUde) aim to address this weakness.
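As a minimal sketch of reference-based evaluation, the snippet below scores a generated reply against a gold reply with sentence-level BLEU and ROUGE-L. The reply strings are made-up examples, and the nltk and rouge-score packages are just one possible toolchain, not one prescribed by the benchmarks above.

```python
# Hypothetical example: score one generated reply against one reference reply.
# Requires: pip install nltk rouge-score
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "i am doing well thanks for asking"   # gold response (example)
candidate = "i am fine thank you for asking"      # model output (example)

# Sentence-level BLEU with smoothing: short dialogue replies often share no
# higher-order n-grams, so smoothing avoids degenerate zero scores.
bleu = sentence_bleu(
    [reference.split()],
    candidate.split(),
    smoothing_function=SmoothingFunction().method1,
)

# ROUGE-L measures longest-common-subsequence overlap.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(reference, candidate)["rougeL"].fmeasure

print(f"BLEU: {bleu:.3f}  ROUGE-L: {rouge_l:.3f}")
```

Because such n-gram overlap scores penalize valid but differently worded replies, they are usually reported alongside human evaluation or learned metrics like USR and MaUde.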
Libraries
Use these libraries to find Dialogue Generation models and implementations.
Latest papers
Mind the Gap Between Conversations for Improved Long-Term Dialogue Generation
Knowing how to end and resume conversations over time is a natural part of communication, allowing for discussions to span weeks, months, or years.
Fidelity-Enriched Contrastive Search: Reconciling the Faithfulness-Diversity Trade-Off in Text Generation
In this paper, we address the hallucination problem commonly found in natural language generation tasks.
PromptCBLUE: A Chinese Prompt Tuning Benchmark for the Medical Domain
Biomedical language understanding benchmarks are the driving forces for artificial intelligence applications with large language model (LLM) back-ends.
MIRACLE: Towards Personalized Dialogue Generation with Latent-Space Multiple Personal Attribute Control
Subsequently, we employ a conditional variational auto-encoder to align with the dense personalized responses within a latent joint attribute space.
We are what we repeatedly do: Inducing and deploying habitual schemas in persona-based responses
We capture such habitual knowledge using an explicit schema representation, and propose an approach to dialogue generation that retrieves relevant schemas to condition a large language model to generate persona-based responses.
Improving Medical Dialogue Generation with Abstract Meaning Representations
In this paper, we propose a novel framework that models dialogues between patients and healthcare professionals using AMR graphs, where the neural networks incorporate textual and graphical knowledge with a dual attention mechanism.
Promoting Open-domain Dialogue Generation through Learning Pattern Information between Contexts and Responses
In this paper, we first build an open-domain dialogue model based on the pre-trained language model (i.e., GPT-2).
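For readers unfamiliar with this setup, the sketch below shows how a pre-trained GPT-2 checkpoint can be conditioned on a dialogue context to sample a response via Hugging Face Transformers. It is a generic illustration under the assumption of the public "gpt2" checkpoint, not the fine-tuned model from the paper.

```python
# Minimal sketch: sample a dialogue response from a pre-trained GPT-2 model.
# Requires: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Dialogue context (example); the model continues the text after "B:".
context = "A: Hi, how are you doing today?\nB:"
inputs = tokenizer(context, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,       # nucleus sampling for more varied responses
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens, i.e. the response.
reply = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(reply)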
Dataflow Dialogue Generation
We demonstrate task-oriented dialogue generation within the dataflow dialogue paradigm.
ZRIGF: An Innovative Multimodal Framework for Zero-Resource Image-Grounded Dialogue Generation
To overcome this challenge, we propose an innovative multimodal framework, called ZRIGF, which assimilates image-grounded information for dialogue generation in zero-resource situations.
DecompEval: Evaluating Generated Texts as Unsupervised Decomposed Question Answering
Existing evaluation metrics for natural language generation (NLG) tasks face the challenges on generalization ability and interpretability.