Conversational Response Selection

28 papers with code • 11 benchmarks • 10 datasets

Conversational response selection is the task of selecting the most relevant response to a given dialogue context from a set of candidate responses.
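As a toy illustration (not any of the models listed below), candidate responses can be ranked by a similarity score against the context. Here a crude bag-of-words cosine stands in for a learned encoder:

```python
from collections import Counter
import math

def bow_vector(text):
    """Crude bag-of-words counts; real systems use learned encoders."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def select_response(context, candidates):
    """Return the candidate that scores highest against the context."""
    ctx = bow_vector(context)
    return max(candidates, key=lambda c: cosine(ctx, bow_vector(c)))

context = "how do I install the package on ubuntu"
candidates = [
    "try sudo apt-get install package on ubuntu",
    "the weather is nice today",
    "I like pizza",
]
best = select_response(context, candidates)
```

Lexical overlap is a weak signal; the papers below replace it with neural matching over the full multi-turn context.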


Most implemented papers

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

google-research/bert NAACL 2019

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
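For response selection, BERT is typically used as a cross-encoder: the context and a candidate response are packed into one input sequence, separated by special tokens, with segment ids marking which side each token belongs to. A minimal sketch of that pair-input convention (the whitespace "tokenizer" here is purely illustrative; real use goes through BERT's WordPiece tokenizer):

```python
def format_pair(context, response, cls="[CLS]", sep="[SEP]"):
    """BERT-style pair input: [CLS] context [SEP] response [SEP].

    Segment ids are 0 for the context half (including [CLS] and the
    first [SEP]) and 1 for the response half (including its [SEP]).
    """
    ctx_toks, resp_toks = context.split(), response.split()
    tokens = [cls] + ctx_toks + [sep] + resp_toks + [sep]
    segment_ids = [0] * (len(ctx_toks) + 2) + [1] * (len(resp_toks) + 1)
    return tokens, segment_ids

tokens, segment_ids = format_pair("hello there", "hi")
```

The score for the pair is then read off a classification head on the [CLS] position.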

Deep contextualized word representations

flairNLP/flair NAACL 2018

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).

Universal Sentence Encoder

facebookresearch/InferSent 29 Mar 2018

For both variants, we investigate and report the relationship between model complexity, resource consumption, the availability of transfer task training data, and task performance.

The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems

npow/ubottu WS 2015

This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words.
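The Ubuntu corpus is typically evaluated by ranking a small candidate set and reporting Recall@k (for example, 1-in-10 R@1: whether the true response is ranked first among 10 candidates). A minimal sketch of that metric:

```python
def recall_at_k(ranked_candidates, ground_truth, k):
    """1.0 if the ground-truth response is in the top-k of the ranking."""
    return 1.0 if ground_truth in ranked_candidates[:k] else 0.0

# Toy example: 10 candidates, ordered by a model's score (best first).
ranked = ["r3", "r0", "r7", "r1", "r2", "r4", "r5", "r6", "r8", "r9"]
r_at_1 = recall_at_k(ranked, "r0", 1)  # true response not at rank 1
r_at_2 = recall_at_k(ranked, "r0", 2)  # true response within the top 2
```

Averaging this indicator over the test set gives the R@k numbers reported on the benchmarks above.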

Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring

sfzhou5678/PolyEncoder 22 Apr 2019

The use of deep pre-trained bidirectional transformers has led to remarkable progress in a number of applications (Devlin et al., 2018).
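The poly-encoder idea, roughly: m learned "context codes" attend over the context encoder's token outputs to produce m global context features, the candidate embedding attends over those m features, and the final score is a dot product. A rough NumPy sketch with made-up dimensions (the encoders themselves are replaced by random vectors here):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def poly_encoder_score(ctx_tokens, cand_emb, codes):
    """Score one candidate against a context, poly-encoder style.

    ctx_tokens: (T, d) per-token context encoder outputs
    cand_emb:   (d,)   candidate response embedding
    codes:      (m, d) learned context codes
    """
    # Each code attends over the T context token vectors -> (m, d) features.
    feats = np.stack([softmax(code @ ctx_tokens.T) @ ctx_tokens
                      for code in codes])
    # The candidate attends over the m features -> one context vector.
    ctx_vec = softmax(cand_emb @ feats.T) @ feats
    return float(ctx_vec @ cand_emb)

rng = np.random.default_rng(0)
d, T, m = 8, 5, 3
score = poly_encoder_score(rng.standard_normal((T, d)),
                           rng.standard_normal(d),
                           rng.standard_normal((m, d)))
```

Because only the m features (not the full cross-attention over every token pair) depend on the candidate, candidates can be pre-encoded, which is the source of the speed/accuracy trade-off the paper studies.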

A Repository of Conversational Datasets

PolyAI-LDN/conversational-datasets WS 2019

Progress in Machine Learning is often driven by the availability of large datasets, and consistent evaluation metrics for comparing modeling approaches.

ConveRT: Efficient and Accurate Conversational Representations from Transformers

golsun/dialogrpt Findings of the Association for Computational Linguistics 2020

General-purpose pretrained sentence encoders such as BERT are not ideal for real-world conversational AI applications; they are computationally heavy, slow, and expensive to train.

Sequential Attention-based Network for Noetic End-to-End Response Selection

alibaba/esim-response-selection 9 Jan 2019

The noetic end-to-end response selection challenge, one track of the Dialog System Technology Challenges 7 (DSTC7), aims to push the state of the art in utterance classification for real-world goal-oriented dialog systems: participants must select the correct next utterance from a set of candidates given the multi-turn context.

Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-based Chatbots

MarkWuNLP/MultiTurnResponseSelection ACL 2017

Existing work either concatenates the utterances in a context or matches a response against a highly abstract context vector, which may lose relationships among utterances or important contextual information.

Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network

baidu/Dialogue ACL 2018

Humans generate responses relying on semantic and functional dependencies, including coreference relations, among dialogue elements and their context.