Open-Domain Dialog

31 papers with code • 1 benchmark • 10 datasets


Most implemented papers

KILT: a Benchmark for Knowledge Intensive Language Tasks

facebookresearch/KILT NAACL 2021

We test both task-specific and general baselines, evaluating downstream performance in addition to the ability of the models to provide provenance.

Approximating Interactive Human Evaluation with Self-Play for Open-Domain Dialog Systems

natashamjaques/neural_chat NeurIPS 2019

To compare this novel metric and interactive evaluation against state-of-the-art metrics and human evaluation of static conversations, we perform extended experiments with a set of models, including several that improve on recent hierarchical dialog generation architectures through utterance-level sentiment and semantic knowledge distillation.

Investigating Evaluation of Open-Domain Dialogue Systems With Human Generated Multiple References

prakharguptaz/multirefeval WS 2019

The aim of this paper is to mitigate the shortcomings of automatic evaluation of open-domain dialog systems through multi-reference evaluation.
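The core idea of multi-reference evaluation is to score a generated response against several acceptable references rather than a single one, keeping the best match. A minimal sketch (not the paper's implementation; the unigram-F1 metric and all names here are illustrative assumptions):

```python
from collections import Counter


def unigram_f1(hypothesis, reference):
    """Unigram-overlap F1 between two whitespace-tokenized strings."""
    hyp, ref = hypothesis.split(), reference.split()
    overlap = sum((Counter(hyp) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(hyp)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)


def multi_reference_score(hypothesis, references):
    """Score a response against every reference and keep the best match,
    so any one valid answer is enough for a high score."""
    return max(unigram_f1(hypothesis, r) for r in references)


refs = ["i love hiking on weekends", "hiking is my favorite hobby"]
score = multi_reference_score("i really love hiking", refs)
```

Taking the max over references is what mitigates the one-reference problem: an open-domain prompt has many valid answers, and a response should not be penalized for matching only one of them.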

Hurdles to Progress in Long-form Question Answering

martiansideofthemoon/hurdles-longform-qa NAACL 2021

The task of long-form question answering (LFQA) involves retrieving documents relevant to a given question and using them to generate a paragraph-length answer.

RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems

thu-coai/OpenMEVA 11 Jan 2017

Open-domain human-computer conversation has been attracting increasing attention over the past few years.

Augmenting Neural Response Generation with Context-Aware Topical Attention

nouhadziri/THRED WS 2019

Our model is built upon the basic Seq2Seq model by augmenting it with a hierarchical joint attention mechanism that incorporates topical concepts and previous interactions into the response generation.
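The joint attention idea is that, at each decoding step, the model attends over both the hidden states of previous utterances and the embeddings of topical concepts, and combines the two read-outs into one context vector. A minimal dot-product-attention sketch of that combination (hypothetical names and plain Python vectors, not the THRED implementation):

```python
import math


def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


def attend(query, vectors):
    """Dot-product attention: a weighted sum of `vectors`,
    weighted by each vector's similarity to `query`."""
    weights = softmax([dot(query, v) for v in vectors])
    dim = len(vectors[0])
    return [sum(w * v[i] for w, v in zip(weights, vectors)) for i in range(dim)]


def joint_context(decoder_state, utterance_states, topic_embeddings):
    """Concatenate a context-level read-out (previous interactions) with a
    topic-level read-out (topical concepts) for the response decoder."""
    return attend(decoder_state, utterance_states) + attend(decoder_state, topic_embeddings)
```

Concatenating the two attention outputs lets the decoder condition on conversation history and topic words at once, which is the intuition behind augmenting a basic Seq2Seq model this way.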

Evaluating Coherence in Dialogue Systems using Entailment

nouhadziri/DialogEntailment NAACL 2019

Evaluating open-domain dialogue systems is difficult due to the diversity of possible correct answers.

Way Off-Policy Batch Deep Reinforcement Learning of Implicit Human Preferences in Dialog

natashamjaques/neural_chat 30 Jun 2019

Most deep reinforcement learning (RL) systems are not able to learn effectively from off-policy data, especially if they cannot explore online in the environment.

Large-Scale Transfer Learning for Natural Language Generation

atselousov/transformer_chatbot_experiments ACL 2019

Large-scale pretrained language models define the state of the art in natural language processing, achieving outstanding performance on a variety of tasks.

A Multi-Turn Emotionally Engaging Dialog Model

yuboxie/meed 15 Aug 2019

Open-domain dialog systems (also known as chatbots) have drawn increasing attention in natural language processing.