3 code implementations • WS 2019 • Matthew Henderson, Paweł Budzianowski, Iñigo Casanueva, Sam Coope, Daniela Gerz, Girish Kumar, Nikola Mrkšić, Georgios Spithourakis, Pei-Hao Su, Ivan Vulić, Tsung-Hsien Wen
Progress in Machine Learning is often driven by the availability of large datasets and consistent evaluation metrics for comparing modeling approaches (a metric sketch follows the task tags below).
Tasks: BIG-bench Machine Learning, Conversational Response Selection (+1 more)
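To make the evaluation side of response selection concrete, here is a minimal sketch of how a recall@k metric could be computed over scored candidate responses. The encoder is a placeholder (random vectors) and the function names are illustrative; this is not the paper's actual benchmark code.

```python
# Hypothetical sketch: score candidate responses for one dialogue context
# and compute recall@k. The "encodings" are random placeholders, not the
# models or datasets described in the paper.
import numpy as np

def recall_at_k(scores: np.ndarray, true_index: int, k: int) -> float:
    """Return 1.0 if the true response is among the top-k scored candidates."""
    top_k = np.argsort(-scores)[:k]
    return float(true_index in top_k)

rng = np.random.default_rng(0)
context_vec = rng.normal(size=128)            # encoded dialogue context (placeholder)
candidate_vecs = rng.normal(size=(100, 128))  # 100 candidate responses (placeholder)

scores = candidate_vecs @ context_vec         # dot-product relevance scores
print(recall_at_k(scores, true_index=0, k=1))
```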
1 code implementation • ACL 2020 • Sam Coope, Tyler Farghly, Daniela Gerz, Ivan Vulić, Matthew Henderson
We introduce Span-ConveRT, a light-weight model for dialog slot-filling that frames the task as turn-based span extraction.
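The span-extraction framing can be illustrated with a small sketch: given contextual token representations for a user turn, two linear scorers predict the start and end of the slot value. This is a generic sketch under assumed layer names and dimensions, not the actual Span-ConveRT architecture.

```python
# Minimal sketch of slot filling as span extraction: predict start and end
# positions of the slot value over the tokens of a user turn. Layer names,
# sizes, and the encoder are illustrative assumptions.
import torch
import torch.nn as nn

class SpanExtractor(nn.Module):
    def __init__(self, hidden_dim: int = 256):
        super().__init__()
        self.start_scorer = nn.Linear(hidden_dim, 1)
        self.end_scorer = nn.Linear(hidden_dim, 1)

    def forward(self, token_reprs: torch.Tensor):
        # token_reprs: (batch, seq_len, hidden_dim) from any sentence encoder
        start_logits = self.start_scorer(token_reprs).squeeze(-1)
        end_logits = self.end_scorer(token_reprs).squeeze(-1)
        return start_logits, end_logits

reprs = torch.randn(1, 12, 256)                  # one 12-token turn (placeholder)
start_logits, end_logits = SpanExtractor()(reprs)
print(start_logits.argmax(-1).item(), end_logits.argmax(-1).item())  # predicted span
```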
1 code implementation • ACL 2019 • Matthew Henderson, Ivan Vulić, Daniela Gerz, Iñigo Casanueva, Paweł Budzianowski, Sam Coope, Georgios Spithourakis, Tsung-Hsien Wen, Nikola Mrkšić, Pei-Hao Su
Despite their popularity in the chatbot literature, retrieval-based models have had modest impact on task-oriented dialogue systems; the main obstacle to their application is the low-data regime of most task-oriented dialogue tasks.
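For readers unfamiliar with the retrieval-based setup, a dual-encoder response-selection objective can be sketched as follows: context and response encoders produce vectors, and training pushes matching pairs above in-batch negatives. The encoders here are placeholders; this is a generic sketch, not the paper's model or training recipe.

```python
# Hedged sketch of a dual-encoder response-selection objective with in-batch
# negatives. Encoder outputs are random placeholders, not the paper's model.
import torch
import torch.nn.functional as F

def response_selection_loss(context_vecs, response_vecs):
    # context_vecs, response_vecs: (batch, dim); row i of each is a matching pair
    scores = context_vecs @ response_vecs.T      # (batch, batch) similarity matrix
    targets = torch.arange(scores.size(0))       # diagonal entries are the positives
    return F.cross_entropy(scores, targets)

contexts = F.normalize(torch.randn(8, 512), dim=-1)   # placeholder encodings
responses = F.normalize(torch.randn(8, 512), dim=-1)
print(response_selection_loss(contexts, responses).item())
```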
1 code implementation • ACL 2018 • Andrej Žukov-Gregorič, Yoram Bachrach, Sam Coope
We present a new architecture for named entity recognition.
no code implementations • 5 Jul 2017 • Yoram Bachrach, Andrej Zukov-Gregoric, Sam Coope, Ed Tovell, Bogdan Maksak, Jose Rodriguez, Conan McMurtie
We propose a new attention mechanism for neural-based question answering that depends on varying granularities of the input.
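The abstract does not spell out the mechanism, so the following is only an illustrative sketch of what attention over multiple input granularities could look like: fine-grained (word-level) and coarse-grained (sentence-level) relevance scores are combined before weighting the values. All names and shapes are assumptions, not the paper's method.

```python
# Illustrative sketch (not the paper's exact mechanism) of attention computed
# at two granularities of the input, combined into one set of weights.
import torch
import torch.nn.functional as F

def multi_granularity_attention(query, word_keys, sent_keys, values):
    # query: (dim,); word_keys, sent_keys, values: (seq_len, dim)
    word_scores = word_keys @ query          # fine-grained relevance
    sent_scores = sent_keys @ query          # coarse-grained relevance
    weights = F.softmax(word_scores + sent_scores, dim=0)
    return weights @ values                  # attention-weighted summary

dim, seq_len = 64, 10
summary = multi_granularity_attention(torch.randn(dim),
                                      torch.randn(seq_len, dim),
                                      torch.randn(seq_len, dim),
                                      torch.randn(seq_len, dim))
print(summary.shape)
```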
no code implementations • ICLR 2018 • Sam Coope, Andrej Zukov-Gregoric, Yoram Bachrach
We propose a neural clustering model that jointly learns both latent features and how they cluster.
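The general idea of jointly learning latent features and cluster assignments can be sketched with an encoder plus learnable centroids that define soft memberships; the paper's actual model and objective may differ, and every name below is an assumption for illustration.

```python
# Hedged sketch of joint feature learning and clustering: an encoder maps
# inputs to a latent space while learnable centroids give soft assignments.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralClustering(nn.Module):
    def __init__(self, input_dim=20, latent_dim=8, n_clusters=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        self.centroids = nn.Parameter(torch.randn(n_clusters, latent_dim))

    def forward(self, x):
        z = self.encoder(x)                          # latent features
        dists = torch.cdist(z, self.centroids)       # distance to each centroid
        assignments = F.softmax(-dists, dim=-1)      # soft cluster memberships
        return z, assignments

model = NeuralClustering()
_, memberships = model(torch.randn(5, 20))
print(memberships.argmax(dim=-1))                    # hard cluster labels
```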
no code implementations • IJCNLP 2019 • Matthew Henderson, Ivan Vulić, Iñigo Casanueva, Paweł Budzianowski, Daniela Gerz, Sam Coope, Georgios Spithourakis, Tsung-Hsien Wen, Nikola Mrkšić, Pei-Hao Su
We present PolyResponse, a conversational search engine that supports task-oriented dialogue.
no code implementations • EMNLP 2021 • Ivan Vulić, Pei-Hao Su, Sam Coope, Daniela Gerz, Paweł Budzianowski, Iñigo Casanueva, Nikola Mrkšić, Tsung-Hsien Wen
Transformer-based language models (LMs) pretrained on large text collections have been shown to store a wealth of semantic knowledge.
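One simple way to see this stored semantic knowledge, unrelated to the paper's own procedure, is to mean-pool token embeddings from a pretrained LM and compare sentences by cosine similarity. The model choice and sentences below are arbitrary illustrations.

```python
# A minimal probe of the semantic knowledge in a pretrained LM: mean-pooled
# BERT token embeddings compared by cosine similarity. Illustration only,
# not the method used in the paper.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["book a table for two", "reserve a restaurant seat", "play some jazz"]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state        # (batch, seq_len, dim)

mask = inputs["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)     # mean-pool over real tokens
sims = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1:], dim=-1)
print(sims)  # the two booking requests should score higher than the music request
```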