Search Results for author: Tsung-Hsien Wen

Found 29 papers, 9 papers with code

Training Neural Response Selection for Task-Oriented Dialogue Systems

1 code implementation ACL 2019 Matthew Henderson, Ivan Vulić, Daniela Gerz, Iñigo Casanueva, Paweł Budzianowski, Sam Coope, Georgios Spithourakis, Tsung-Hsien Wen, Nikola Mrkšić, Pei-Hao Su

Despite their popularity in the chatbot literature, retrieval-based models have had modest impact on task-oriented dialogue systems, with the main obstacle to their application being the low-data regime of most task-oriented dialogue tasks.

Chatbot Language Modelling +1

MultiWOZ - A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling

1 code implementation EMNLP 2018 Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, Milica Gašić

Even though machine learning has become the driving force in the dialogue research community, real breakthroughs have been blocked by the scale of data available. To address this fundamental obstacle, we introduce the Multi-Domain Wizard-of-Oz dataset (MultiWOZ), a fully-labeled collection of human-human written conversations spanning multiple domains and topics. At a size of 10k dialogues, it is at least one order of magnitude larger than all previous annotated task-oriented corpora. Apart from the open-sourced dataset, the contribution of this work is two-fold: firstly, a detailed description of the data collection procedure is provided, along with a summary of the data structure and analysis.

Decision Making Dialogue Management +4

MultiWOZ -- A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling

3 code implementations EMNLP 2018 Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, Milica Gašić

Even though machine learning has become the driving force in the dialogue research community, real breakthroughs have been blocked by the scale of data available.

Response Generation

Latent Topic Conversational Models

no code implementations ICLR 2018 Tsung-Hsien Wen, Minh-Thang Luong

In this paper, we propose Latent Topic Conversational Model (LTCM) which augments seq2seq with a neural latent topic component to better guide response generation and make training easier.

Response Generation
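As a rough illustration of the LTCM idea of steering a seq2seq decoder with a latent topic vector, one could blend the decoder hidden state with a topic representation through a learned gate. The function and parameter names below are hypothetical, a minimal sketch of topic-guided state mixing rather than the paper's actual architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def topic_gated_state(h, topic, W_g, b_g):
    """Blend a decoder hidden state `h` with a latent topic vector `topic`
    through a learned per-dimension gate. The wiring here is illustrative,
    not the exact LTCM formulation."""
    # Gate is computed from both the state and the topic vector.
    g = sigmoid(W_g @ np.concatenate([h, topic]) + b_g)
    # Per-dimension convex combination: g близко to 1 keeps the state,
    # g close to 0 lets the topic dominate.
    return g * h + (1.0 - g) * topic
```

Because the gate output lies in (0, 1), each dimension of the result is a convex combination of the state and the topic vector, so the topic can softly bias generation without replacing the decoder state outright.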

Latent Intention Dialogue Models

1 code implementation ICML 2017 Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, Steve Young

Developing a dialogue agent that is capable of making autonomous decisions and communicating by natural language is one of the long-term goals of machine learning research.

Reinforcement Learning Variational Inference

Multi-domain Neural Network Language Generation for Spoken Dialogue Systems

no code implementations NAACL 2016 Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M. Rojas-Barahona, Pei-Hao Su, David Vandyke, Steve Young

Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains.

Domain Adaptation Spoken Dialogue Systems +1

Counter-fitting Word Vectors to Linguistic Constraints

2 code implementations NAACL 2016 Nikola Mrkšić, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gašić, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, Steve Young

In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors' capability for judging semantic similarity.

Dialogue State Tracking Semantic Similarity +1
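The counter-fitting idea, pushing antonym pairs apart and pulling synonym pairs together in an existing vector space, can be sketched as follows. The update rules, hyperparameters, and function name are a simplified illustration under stated assumptions, not the paper's exact objective:

```python
import numpy as np

def counter_fit(vectors, antonyms, synonyms, iterations=20, lr=0.1, margin=1.0):
    """Toy counter-fitting sketch: repel antonym pairs that lie closer than
    `margin` and attract synonym pairs, operating on a copy of the input
    space. `vectors` maps word -> 1-D numpy array."""
    vecs = {w: np.asarray(v, dtype=float).copy() for w, v in vectors.items()}
    for _ in range(iterations):
        for a, b in antonyms:
            d = vecs[a] - vecs[b]
            dist = np.linalg.norm(d)
            if dist < margin:                 # antonyms too close: push apart
                step = lr * d / (dist + 1e-8)
                vecs[a] = vecs[a] + step
                vecs[b] = vecs[b] - step
        for a, b in synonyms:
            d = vecs[a] - vecs[b]             # synonyms: pull together
            vecs[a] = vecs[a] - lr * d
            vecs[b] = vecs[b] + lr * d
    return vecs
```

For example, starting from vectors where "cheap" and "expensive" are near neighbours (a common artifact of distributional training), the antonym constraint increases their distance while the synonym constraint draws "cheap" and "inexpensive" together. The published method additionally includes a vector-space-preservation term that keeps the fitted space close to the original, which this sketch omits.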

Learning from Real Users: Rating Dialogue Success with Neural Networks for Reinforcement Learning in Spoken Dialogue Systems

no code implementations 13 Aug 2015 Pei-Hao Su, David Vandyke, Milica Gasic, Dongho Kim, Nikola Mrksic, Tsung-Hsien Wen, Steve Young

The models are trained on dialogues generated by a simulated user and the best model is then used to train a policy on-line which is shown to perform at least as well as a baseline system using prior knowledge of the user's task.

Spoken Dialogue Systems

Stochastic Language Generation in Dialogue using Recurrent Neural Networks with Convolutional Sentence Reranking

no code implementations WS 2015 Tsung-Hsien Wen, Milica Gasic, Dongho Kim, Nikola Mrksic, Pei-Hao Su, David Vandyke, Steve Young

The natural language generation (NLG) component of a spoken dialogue system (SDS) usually needs a substantial amount of handcrafting or a well-labeled dataset to be trained on.

Text Generation

Multi-domain Dialog State Tracking using Recurrent Neural Networks

no code implementations IJCNLP 2015 Nikola Mrkšić, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gašić, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, Steve Young

Dialog state tracking is a key component of many modern dialog systems, most of which are designed with a single, well-defined domain in mind.
