Search Results for author: Tsung-Hsien Wen

Found 30 papers, 9 papers with code

Dial BeInfo for Faithfulness: Improving Factuality of Information-Seeking Dialogue via Behavioural Fine-Tuning

no code implementations16 Nov 2023 Evgeniia Razumovskaia, Ivan Vulić, Pavle Marković, Tomasz Cichy, Qian Zheng, Tsung-Hsien Wen, Paweł Budzianowski

Factuality is a crucial requirement in information-seeking dialogue: the system should respond to the user's queries so that the responses are meaningful and aligned with the knowledge provided to the system.

Training Neural Response Selection for Task-Oriented Dialogue Systems

1 code implementation ACL 2019 Matthew Henderson, Ivan Vulić, Daniela Gerz, Iñigo Casanueva, Paweł Budzianowski, Sam Coope, Georgios Spithourakis, Tsung-Hsien Wen, Nikola Mrkšić, Pei-Hao Su

Despite their popularity in the chatbot literature, retrieval-based models have had modest impact on task-oriented dialogue systems, with the main obstacle to their application being the low-data regime of most task-oriented dialogue tasks.

Chatbot Language Modelling +2

MultiWOZ - A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling

1 code implementation EMNLP 2018 Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, Milica Gašić

Even though machine learning has become the major scene in the dialogue research community, the real breakthrough has been blocked by the scale of data available. To address this fundamental obstacle, we introduce the Multi-Domain Wizard-of-Oz dataset (MultiWOZ), a fully-labeled collection of human-human written conversations spanning multiple domains and topics. At a size of 10k dialogues, it is at least one order of magnitude larger than all previous annotated task-oriented corpora. The contribution of this work, apart from the open-sourced dataset, is two-fold: firstly, a detailed description of the data collection procedure along with a summary of data structure and analysis is provided.

Decision Making Dialogue Management +4
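
As a concrete aside on working with a corpus of this scale, the short sketch below counts dialogues and turns in a MultiWOZ-style JSON dump. It assumes the commonly distributed data.json layout, i.e. a dictionary mapping dialogue IDs to objects whose "log" field lists the turns; the file path and field names are assumptions for illustration, not details taken from this entry.

import json

# Minimal sketch: inspect a MultiWOZ-style dump. Assumes data.json maps
# dialogue IDs to objects with a "log" list of turns, each carrying a
# "text" string (an assumed layout, not quoted from the paper).
with open("data.json") as f:
    dialogues = json.load(f)

n_dialogues = len(dialogues)
n_turns = sum(len(d.get("log", [])) for d in dialogues.values())

print(f"dialogues: {n_dialogues}")    # ~10k in the full corpus
print(f"turns: {n_turns}")
print(f"avg turns per dialogue: {n_turns / max(n_dialogues, 1):.1f}")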

MultiWOZ -- A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling

5 code implementations EMNLP 2018 Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, Milica Gašić

Even though machine learning has become the major scene in the dialogue research community, the real breakthrough has been blocked by the scale of data available.

Response Generation

Latent Topic Conversational Models

no code implementations ICLR 2018 Tsung-Hsien Wen, Minh-Thang Luong

In this paper, we propose Latent Topic Conversational Model (LTCM) which augments seq2seq with a neural latent topic component to better guide response generation and make training easier.

Response Generation Sentence
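
Since the summary above describes an architecture rather than just motivation (seq2seq augmented with a neural latent topic component), a hedged sketch may help make the idea concrete. The class below is only a minimal illustration under assumptions of its own: a Gaussian latent vector is inferred from the encoder state and concatenated to every decoder input; layer sizes and the injection mechanism are illustrative, not LTCM's exact design.

import torch
import torch.nn as nn

class LatentTopicSeq2Seq(nn.Module):
    """Sketch of a seq2seq model with a latent topic vector, in the spirit
    of LTCM. The Gaussian latent variable and its concatenation to the
    decoder inputs are simplifying assumptions, not the paper's design."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, topic_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Inference network producing the latent topic distribution q(z | context).
        self.to_mu = nn.Linear(hidden_dim, topic_dim)
        self.to_logvar = nn.Linear(hidden_dim, topic_dim)
        # Decoder consumes the word embedding concatenated with the topic vector.
        self.decoder = nn.GRU(embed_dim + topic_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src, tgt):
        _, h = self.encoder(self.embed(src))                      # h: (1, B, H)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterise
        z_steps = z.unsqueeze(1).expand(-1, tgt.size(1), -1)      # broadcast over time
        dec_out, _ = self.decoder(torch.cat([self.embed(tgt), z_steps], dim=-1), h)
        return self.out(dec_out), mu, logvar                      # logits plus terms for a KL penalty

# Usage: logits, mu, logvar = LatentTopicSeq2Seq(5000)(src_ids, tgt_ids)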

Latent Intention Dialogue Models

1 code implementation ICML 2017 Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, Steve Young

Developing a dialogue agent that is capable of making autonomous decisions and communicating by natural language is one of the long-term goals of machine learning research.

Reinforcement Learning +2

Multi-domain Neural Network Language Generation for Spoken Dialogue Systems

no code implementations NAACL 2016 Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M. Rojas-Barahona, Pei-Hao Su, David Vandyke, Steve Young

Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains.

Domain Adaptation Spoken Dialogue Systems +1

Counter-fitting Word Vectors to Linguistic Constraints

2 code implementations NAACL 2016 Nikola Mrkšić, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gašić, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, Steve Young

In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors' capability for judging semantic similarity.

Dialogue State Tracking Semantic Similarity +1
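
Because this excerpt describes the method itself (injecting antonymy and synonymy constraints into an existing vector space), a small illustrative sketch follows. It uses toy update rules: attract synonym pairs, repel antonym pairs, and pull every vector back toward its original position. The margin, learning rate, and update form are assumptions, not the paper's exact objective.

import numpy as np

def counter_fit(vectors, synonyms, antonyms, steps=20, lr=0.05, ant_margin=1.0):
    """Toy counter-fitting sketch: nudge word vectors so synonym pairs move
    closer and antonym pairs move apart while staying near the original
    space. Update rules and hyperparameters are illustrative assumptions."""
    original = {w: v.copy() for w, v in vectors.items()}
    for _ in range(steps):
        for a, b in synonyms:                  # synonym attraction
            diff = vectors[a] - vectors[b]
            vectors[a] -= lr * diff
            vectors[b] += lr * diff
        for a, b in antonyms:                  # antonym repulsion
            diff = vectors[a] - vectors[b]
            if np.linalg.norm(diff) < ant_margin:
                vectors[a] += lr * diff
                vectors[b] -= lr * diff
        for w in vectors:                      # vector-space preservation
            vectors[w] -= lr * (vectors[w] - original[w])
    return vectors

# Tiny usage example with made-up 2-d "embeddings".
vecs = {"cheap": np.array([1.0, 0.0]),
        "inexpensive": np.array([0.0, 1.0]),
        "expensive": np.array([0.9, 0.1])}
vecs = counter_fit(vecs, synonyms=[("cheap", "inexpensive")],
                   antonyms=[("cheap", "expensive")])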

Learning from Real Users: Rating Dialogue Success with Neural Networks for Reinforcement Learning in Spoken Dialogue Systems

no code implementations13 Aug 2015 Pei-Hao Su, David Vandyke, Milica Gasic, Dongho Kim, Nikola Mrksic, Tsung-Hsien Wen, Steve Young

The models are trained on dialogues generated by a simulated user, and the best model is then used to train a policy on-line, which is shown to perform at least as well as a baseline system using prior knowledge of the user's task.

Reinforcement Learning Spoken Dialogue Systems
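
The description above (rate dialogue success with a neural model trained on simulated dialogues, then reuse it as the on-line reward) lends itself to a small hedged sketch. The estimator below maps a fixed-size dialogue feature vector to a success probability; the feature representation and network shape are illustrative assumptions, not the models evaluated in the paper.

import torch
import torch.nn as nn

class SuccessEstimator(nn.Module):
    """Sketch of a neural dialogue-success estimator whose output can serve
    as the reward for on-line policy learning. The fixed-size feature input
    and the MLP are simplifying assumptions for illustration."""

    def __init__(self, feature_dim=64, hidden_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, dialogue_features):
        return self.net(dialogue_features).squeeze(-1)

# Train on simulated dialogues labelled success/failure, then use the
# predicted probability as the per-dialogue reward with real users.
model = SuccessEstimator()
reward = model(torch.randn(1, 64))   # placeholder features for one dialogue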

Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems

2 code implementations EMNLP 2015 Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-Hao Su, David Vandyke, Steve Young

Natural language generation (NLG) is a critical component of spoken dialogue, and it has a significant impact on both usability and perceived quality.

Informativeness Sentence +2

Stochastic Language Generation in Dialogue using Recurrent Neural Networks with Convolutional Sentence Reranking

no code implementations WS 2015 Tsung-Hsien Wen, Milica Gasic, Dongho Kim, Nikola Mrksic, Pei-Hao Su, David Vandyke, Steve Young

The natural language generation (NLG) component of a spoken dialogue system (SDS) usually needs a substantial amount of handcrafting or a well-labeled dataset to be trained on.

Sentence Text Generation
