Open-Domain Dialog
32 papers with code • 1 benchmark • 11 datasets
Most implemented papers
HERALD: An Annotation Efficient Method to Detect User Disengagement in Social Conversations
Open-domain dialog systems have a user-centric goal: to provide humans with an engaging conversation experience.
Improving Automated Evaluation of Open Domain Dialog via Diverse Reference Augmentation
Multiple different responses are often plausible for a given open domain dialog context.
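Since many responses can be valid for the same context, a single gold reference penalizes perfectly good replies. One common remedy, sketched below under assumed toy scoring (a simple unigram F1, not the paper's actual metric), is to score a response against a *set* of plausible references and keep the best match:

```python
from collections import Counter

def unigram_f1(hypothesis: str, reference: str) -> float:
    """Unigram F1 overlap between a hypothesis and one reference."""
    hyp, ref = Counter(hypothesis.split()), Counter(reference.split())
    overlap = sum((hyp & ref).values())  # multiset intersection
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def multi_reference_score(hypothesis: str, references: list[str]) -> float:
    """Score against a set of references, rewarding a match with ANY of them."""
    return max(unigram_f1(hypothesis, r) for r in references)

refs = ["i love hiking on weekends", "hiking is my favorite hobby"]
print(multi_reference_score("i love hiking", refs))  # → 0.75
```

The `max` over references is the key design choice: a response only needs to resemble one acceptable answer, which better reflects the one-to-many nature of open-domain dialog.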
GenSF: Simultaneous Adaptation of Generative Pre-trained Models and Slot Filling
We instead achieve strong alignment by simultaneously modifying both the pre-trained model and the formulation of the downstream task, which is more efficient and preserves the scalability of transfer learning.
Investigating Robustness of Dialog Models to Popular Figurative Language Constructs
Humans often employ figurative language in communication, including during interactions with dialog systems.
Towards Identifying Social Bias in Dialog Systems: Frame, Datasets, and Benchmarks
Research on open-domain dialog systems has greatly benefited from neural models trained on large-scale corpora; however, such corpora often introduce safety problems (e.g., offensive language, bias, and toxic behavior) that significantly hinder the deployment of dialog systems in practice.
What is wrong with you?: Leveraging User Sentiment for Automatic Dialog Evaluation
Existing model-based metrics for system response evaluation are trained on human annotated data, which is cumbersome to collect.
InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning
We introduce InstructDial, an instruction tuning framework for dialogue, which consists of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue datasets.
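Casting heterogeneous dialogue tasks into one text-to-text format is the core of instruction tuning. A minimal sketch of what such a unified example might look like (the field names, the `[SEP]` separator, and the instruction wording are illustrative assumptions, not InstructDial's actual schema):

```python
def to_instruction_example(instruction: str, dialog_history: list[str], target: str) -> dict:
    """Serialize a dialogue task as an (input, output) text pair:
    the instruction plus the flattened context go in, the answer comes out."""
    context = " [SEP] ".join(dialog_history)  # [SEP] is an arbitrary separator choice
    return {
        "input": f"Instruction: {instruction}\nContext: {context}",
        "output": target,
    }

ex = to_instruction_example(
    "Generate the next response in the conversation.",
    ["Hi, how are you?", "Great, just got back from a trip."],
    "Oh nice, where did you go?",
)
print(ex["input"])
```

Because every task (response generation, intent classification, evaluation, and so on) reduces to the same string-in, string-out shape, a single sequence-to-sequence model can be trained on all of them jointly.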
CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset for Conversational AI
Finally, we provide baseline systems for these tasks and examine the effect of speakers' personalities and emotions on conversation.
GODEL: Large-Scale Pre-Training for Goal-Directed Dialog
We introduce GODEL (Grounded Open Dialogue Language Model), a large pre-trained language model for dialog.
Re2G: Retrieve, Rerank, Generate
As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger.
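The title names the pipeline shape: a cheap first-stage retriever over the whole corpus, a more expensive reranker over the few survivors, then a generator conditioned on the winning evidence. A toy sketch of that control flow, with word-overlap scoring standing in for the real neural retriever/reranker and a placeholder generator (all three scorers here are illustrative stand-ins, not Re2G's models):

```python
from collections import Counter

def overlap(query: str, passage: str) -> int:
    """Toy lexical score: shared word count between query and passage."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Stage 1: cheap scoring over the entire corpus, keep top-k candidates.
    return sorted(corpus, key=lambda p: overlap(query, p), reverse=True)[:k]

def rerank(query: str, candidates: list[str]) -> str:
    # Stage 2: a costlier scorer applied only to the shortlist
    # (in Re2G this would be a cross-attention reranker).
    return max(candidates, key=lambda p: overlap(query, p))

def generate(query: str, evidence: str) -> str:
    # Stage 3: stand-in for a seq2seq generator conditioned on the evidence.
    return f"Answer based on: {evidence}"

corpus = [
    "GODEL is a pre-trained language model for dialog.",
    "Transformers grow in capability with more parameters.",
    "Hiking trails near mountains.",
]
query = "pre-trained model for dialog"
best = rerank(query, retrieve(query, corpus))
print(generate(query, best))
```

The two-stage design is the point: the reranker can afford a richer (slower) scoring function precisely because the retriever has already shrunk the candidate set.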