Conversational Response Selection
28 papers with code • 11 benchmarks • 10 datasets
Conversational response selection is the task of identifying the most relevant response to a given input (often a multi-turn dialogue context) from a collection of candidate sentences.
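The task above can be sketched as scoring every candidate against the context and returning the best one. The snippet below is a minimal, illustrative sketch only: it uses a toy bag-of-words embedding and cosine similarity, whereas the systems listed here use learned neural encoders; the function names and vocabulary are hypothetical.

```python
import numpy as np

def embed(text, vocab):
    # Toy bag-of-words vector; real systems use learned encoders (illustrative only).
    v = np.zeros(len(vocab))
    for tok in text.lower().split():
        if tok in vocab:
            v[vocab[tok]] += 1.0
    return v

def select_response(context, candidates, vocab):
    # Score each candidate by cosine similarity with the context; return the argmax.
    c = embed(context, vocab)
    def cos(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b) / denom if denom else 0.0
    scores = [cos(c, embed(r, vocab)) for r in candidates]
    return candidates[int(np.argmax(scores))]

vocab = {w: i for i, w in enumerate(
    "how do i install a package on ubuntu use apt to it rains today".split())}
context = "how do i install a package on ubuntu"
candidates = ["use apt to install it", "it rains today"]
print(select_response(context, candidates, vocab))  # → "use apt to install it"
```

The same shape (context in, ranked candidates out) underlies both retrieval-based chatbots and the benchmark evaluations below.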
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words.
Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring
The use of deep pre-trained bidirectional transformers has led to remarkable progress in a number of applications (Devlin et al., 2018).
General-purpose pretrained sentence encoders such as BERT are not ideal for real-world conversational AI applications; they are computationally heavy, slow, and expensive to train.
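The Poly-encoder addresses this speed/accuracy trade-off by caching candidate encodings like a bi-encoder while letting a small set of learned "context codes" attend over the context before the final dot-product score. A rough sketch of that scoring step, assuming token embeddings are already computed (the shapes and the use of random codes here are illustrative, not the paper's exact parameterization):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def poly_encoder_score(ctx_tokens, cand_vec, codes):
    # ctx_tokens: (T, d) context token embeddings
    # cand_vec:   (d,)   precomputed (cacheable) candidate embedding
    # codes:      (m, d) learned context codes
    # 1) Each code attends over the context tokens -> m global context vectors.
    attn = softmax(codes @ ctx_tokens.T, axis=-1)   # (m, T)
    ctx_vecs = attn @ ctx_tokens                    # (m, d)
    # 2) The candidate attends over the m context vectors -> one context embedding.
    w = softmax(ctx_vecs @ cand_vec)                # (m,)
    ctx_emb = w @ ctx_vecs                          # (d,)
    # 3) Final score is a dot product, so candidate vectors can be precomputed offline.
    return float(ctx_emb @ cand_vec)
```

Because only step 2 depends on the candidate, and it is cheap, the Poly-encoder keeps near-cross-encoder accuracy at close to bi-encoder inference cost.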
The noetic end-to-end response selection challenge, one track of the Dialog System Technology Challenges 7 (DSTC7), aims to push the state of the art in utterance classification for real-world goal-oriented dialog systems: participants must select the correct next utterance from a set of candidates given the multi-turn context.
Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-based Chatbots
Existing work either concatenates the utterances in a context or matches a response against a single highly abstracted context vector, which may lose the relationships among utterances or important contextual information.
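The SMN idea sketched above is to match the response against each utterance separately and only then aggregate, rather than flattening the context first. A loose, illustrative sketch using word overlap as the matching signal (the paper instead builds word- and segment-level similarity matrices, distills them with CNNs, and aggregates the per-utterance matching vectors with an RNN; the averaging below is a stand-in):

```python
import numpy as np

def utterance_match_scores(utterances, response):
    # Match the response against EACH utterance in turn (word overlap here,
    # similarity matrices + CNN in the actual SMN), preserving per-utterance signals.
    r = set(response.lower().split())
    per_utt = [len(r & set(u.lower().split())) / max(len(r), 1) for u in utterances]
    # Aggregate in chronological order (the paper uses an RNN; we simply average).
    return per_utt, float(np.mean(per_utt))
```

Keeping one matching vector per utterance is what lets the model recover relationships among utterances that a single concatenated context vector would blur.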
Humans generate responses by relying on semantic and functional dependencies, including coreference relations, among dialogue elements and their context.