Open-Domain Dialog

20 papers with code • 1 benchmark • 2 datasets


Greatest papers with code

KILT: a Benchmark for Knowledge Intensive Language Tasks

facebookresearch/KILT NAACL 2021

We test both task-specific and general baselines, evaluating downstream performance in addition to the ability of the models to provide provenance.

Entity Linking, Fact Checking, +4 more

ProphetNet-X: Large-Scale Pre-training Models for English, Chinese, Multi-lingual, Dialog, and Code Generation

microsoft/ProphetNet 16 Apr 2021

ProphetNet is a pre-training-based natural language generation method that shows strong performance on English text summarization and question generation tasks.

Code Generation, Open-Domain Dialog, +3 more

ClovaCall: Korean Goal-Oriented Dialog Speech Corpus for Automatic Speech Recognition of Contact Centers

ClovaAI/ClovaCall 20 Apr 2020

Automatic speech recognition (ASR) via call is essential for various applications, including AI for contact center (AICC) services.

Goal-Oriented Dialog, Open-Domain Dialog, +1 more

Hierarchical Reinforcement Learning for Open-Domain Dialog

natashamjaques/neural_chat 17 Sep 2019

Open-domain dialog generation is a challenging problem: maximum likelihood training can lead to repetitive outputs, models have difficulty tracking long-term conversational goals, and training on standard movie or online datasets may produce inappropriate, biased, or offensive text.

Hierarchical Reinforcement Learning, Open-Domain Dialog

Way Off-Policy Batch Deep Reinforcement Learning of Implicit Human Preferences in Dialog

natashamjaques/neural_chat 30 Jun 2019

Most deep reinforcement learning (RL) systems are not able to learn effectively from off-policy data, especially if they cannot explore online in the environment.

Open-Domain Dialog, Q-Learning

Approximating Interactive Human Evaluation with Self-Play for Open-Domain Dialog Systems

natashamjaques/neural_chat NeurIPS 2019

To investigate the strengths of this novel metric and of interactive evaluation, relative to state-of-the-art metrics and human evaluation of static conversations, we perform extended experiments with a set of models. Several of these models make novel improvements to recent hierarchical dialog generation architectures through utterance-level sentiment and semantic knowledge distillation.

Knowledge Distillation, Open-Domain Dialog, +1 more

USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation

shikib/usr ACL 2020

The lack of meaningful automatic evaluation metrics for dialog has impeded open-domain dialog research.

Open-Domain Dialog, Text Generation

Investigating Evaluation of Open-Domain Dialogue Systems With Human Generated Multiple References

prakharguptaz/multirefeval WS 2019

The aim of this paper is to mitigate the shortcomings of automatic evaluation of open-domain dialog systems through multi-reference evaluation.

Open-Domain Dialog

RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems

thu-coai/OpenMEVA 11 Jan 2017

Open-domain human-computer conversation has been attracting increasing attention over the past few years.

Open-Domain Dialog