Search Results for author: Milica Gasic

Found 21 papers, 4 papers with code

AgentGraph: Towards Universal Dialogue Management with Structured Deep Reinforcement Learning

no code implementations 27 May 2019 Lu Chen, Zhi Chen, Bowen Tan, Sishan Long, Milica Gasic, Kai Yu

Experiments show that AgentGraph models significantly outperform traditional reinforcement learning approaches on most of the 18 tasks of the PyDial benchmark.

Deep Reinforcement Learning, Dialogue Management, +5 more

Deep learning for language understanding of mental health concepts derived from Cognitive Behavioural Therapy

1 code implementation WS 2018 Lina Rojas-Barahona, Bo-Hsiang Tseng, Yinpei Dai, Clare Mansfield, Osman Ramadan, Stefan Ultes, Michael Crawford, Milica Gasic

In recent years, we have seen deep learning and distributed representations of words and sentences make an impact on a number of natural language processing tasks, such as similarity, entailment and sentiment analysis.

Deep Learning, Sentence, +3 more

Sample-efficient Actor-Critic Reinforcement Learning with Supervised Data for Dialogue Management

no code implementations WS 2017 Pei-Hao Su, Pawel Budzianowski, Stefan Ultes, Milica Gasic, Steve Young

Firstly, to speed up the learning process, two sample-efficient neural network algorithms are presented: trust region actor-critic with experience replay (TRACER) and episodic natural actor-critic with experience replay (eNACER); a minimal actor-critic sketch follows this entry.

Deep Reinforcement Learning, Dialogue Management, +3 more
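
For orientation, the following is a minimal sketch of an advantage actor-critic update drawn from an experience replay buffer, written in PyTorch. It deliberately omits the trust-region constraint, importance-weighted off-policy corrections and natural-gradient machinery that TRACER and eNACER add, and the state and action dimensions are placeholders rather than values from the paper.

```python
# Minimal actor-critic update with experience replay (illustrative sketch only).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, NUM_ACTIONS = 268, 14  # hypothetical belief-state and summary-action sizes

class ActorCritic(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU())
        self.policy = nn.Linear(128, NUM_ACTIONS)  # actor head: action logits
        self.value = nn.Linear(128, 1)             # critic head: state-value estimate

    def forward(self, state):
        h = self.shared(state)
        return self.policy(h), self.value(h)

model = ActorCritic()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # stores (state, action, reward, next_state, done) tuples

def update(batch_size=64, gamma=0.99):
    """One actor-critic step on a minibatch sampled from the replay buffer."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    states = torch.stack([b[0] for b in batch])
    actions = torch.tensor([b[1] for b in batch])
    rewards = torch.tensor([b[2] for b in batch], dtype=torch.float32)
    next_states = torch.stack([b[3] for b in batch])
    dones = torch.tensor([b[4] for b in batch], dtype=torch.float32)

    logits, values = model(states)
    with torch.no_grad():
        _, next_values = model(next_states)
        targets = rewards + gamma * (1.0 - dones) * next_values.squeeze(-1)

    advantage = targets - values.squeeze(-1)
    log_probs = F.log_softmax(logits, dim=-1).gather(1, actions.unsqueeze(1)).squeeze(1)

    actor_loss = -(log_probs * advantage.detach()).mean()  # policy gradient with advantage baseline
    critic_loss = F.mse_loss(values.squeeze(-1), targets)  # value regression to bootstrapped target
    optimizer.zero_grad()
    (actor_loss + critic_loss).backward()
    optimizer.step()
```

Sampling minibatches from the replay buffer is what makes the update sample-efficient relative to purely on-policy learning; the paper's contribution lies in making such off-policy reuse stable, which this sketch does not reproduce.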

Multi-domain Neural Network Language Generation for Spoken Dialogue Systems

no code implementations NAACL 2016 Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M. Rojas-Barahona, Pei-Hao Su, David Vandyke, Steve Young

Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains.

Domain Adaptation, Spoken Dialogue Systems, +1 more

Learning from Real Users: Rating Dialogue Success with Neural Networks for Reinforcement Learning in Spoken Dialogue Systems

no code implementations13 Aug 2015 Pei-Hao Su, David Vandyke, Milica Gasic, Dongho Kim, Nikola Mrksic, Tsung-Hsien Wen, Steve Young

The models are trained on dialogues generated by a simulated user, and the best model is then used to train a policy on-line, which is shown to perform at least as well as a baseline system that uses prior knowledge of the user's task. A sketch of such a neural success estimator follows this entry.

Reinforcement Learning, Spoken Dialogue Systems
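
The sketch below shows one simple way to frame such a dialogue-success rater: a recurrent network that reads one feature vector per turn and outputs a success probability, which can then serve as the reward signal for policy training. The feature dimension, architecture and training details are assumptions for illustration, not the paper's exact setup.

```python
# Minimal recurrent dialogue-success estimator (illustrative sketch only).
import torch
import torch.nn as nn

TURN_FEATURE_DIM = 74  # hypothetical size of the per-turn feature vector

class SuccessEstimator(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        self.rnn = nn.GRU(TURN_FEATURE_DIM, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, 1)

    def forward(self, turns):
        # turns: (batch, num_turns, TURN_FEATURE_DIM), one feature vector per dialogue turn
        _, last_hidden = self.rnn(turns)                         # (1, batch, hidden)
        return torch.sigmoid(self.classifier(last_hidden[-1]))   # P(dialogue was successful)

# The predicted success probability can stand in for an explicit user rating when
# computing the reward used to train the dialogue policy on-line.
estimator = SuccessEstimator()
reward_signal = estimator(torch.randn(1, 12, TURN_FEATURE_DIM)).item()  # e.g. a 12-turn dialogue
```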

Stochastic Language Generation in Dialogue using Recurrent Neural Networks with Convolutional Sentence Reranking

no code implementations WS 2015 Tsung-Hsien Wen, Milica Gasic, Dongho Kim, Nikola Mrksic, Pei-Hao Su, David Vandyke, Steve Young

The natural language generation (NLG) component of a spoken dialogue system (SDS) usually needs a substantial amount of handcrafting or a well-labeled dataset to be trained on.

Sentence, Text Generation

Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems

2 code implementations EMNLP 2015 Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-Hao Su, David Vandyke, Steve Young

Natural language generation (NLG) is a critical component of spoken dialogue systems and has a significant impact on both usability and perceived quality. A sketch of the semantically conditioned LSTM cell follows this entry.

Informativeness, Sentence, +2 more
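
The core idea of the semantically conditioned LSTM is to augment a standard LSTM cell with a dialogue-act (DA) vector that is gradually consumed by a learned reading gate, so each slot-value pair is mentioned and then switched off. The sketch below follows that gating scheme; dimensions are placeholders, the weight factorisation is simplified, and the convolutional reranker used at decoding time is omitted.

```python
# Sketch of a semantically conditioned LSTM cell (simplified, illustrative only).
import torch
import torch.nn as nn

class SCLSTMCell(nn.Module):
    """LSTM cell augmented with a dialogue-act reading gate."""
    def __init__(self, input_size, hidden_size, da_size, alpha=0.5):
        super().__init__()
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)   # i, f, o, candidate cell
        self.read_x = nn.Linear(input_size, da_size)                        # reading gate from word input
        self.read_h = nn.Linear(hidden_size, da_size, bias=False)           # ... and from previous hidden state
        self.da_to_cell = nn.Linear(da_size, hidden_size, bias=False)       # injects remaining DA into the cell
        self.alpha = alpha

    def forward(self, x_t, h_prev, c_prev, d_prev):
        i, f, o, c_hat = self.gates(torch.cat([x_t, h_prev], dim=-1)).chunk(4, dim=-1)
        i, f, o, c_hat = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(c_hat)
        # Reading gate: how much of the remaining dialogue act to "consume" at this step.
        r = torch.sigmoid(self.read_x(x_t) + self.alpha * self.read_h(h_prev))
        d_t = r * d_prev                                          # DA features not yet realised
        c_t = f * c_prev + i * c_hat + torch.tanh(self.da_to_cell(d_t))
        h_t = o * torch.tanh(c_t)
        return h_t, c_t, d_t

# d_0 would be a 1-hot style encoding of the dialogue act (slot-value pairs to mention).
cell = SCLSTMCell(input_size=50, hidden_size=80, da_size=30)
x = torch.randn(1, 50)
h, c, d = torch.zeros(1, 80), torch.zeros(1, 80), torch.ones(1, 30)
h, c, d = cell(x, h, c, d)
```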
