We present a new neural architecture for wide-coverage Natural Language Understanding in Spoken Dialogue Systems.
Plato is designed to be easy to understand and debug, and it is agnostic to the underlying learning frameworks used to train each component.
Attention-based recurrent neural network models for joint intent detection and slot filling have achieved state-of-the-art performance, but they rely on independent attention weights for the two tasks.
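To make the setup concrete, here is a toy numpy sketch of a joint intent-detection and slot-filling head over shared encoder states, where each task has its own (independent) attention weights. All names, sizes, and parameter shapes are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
T, d, n_intents, n_slots = 6, 8, 4, 10     # tokens, hidden size, label counts (toy)

H = rng.standard_normal((T, d))            # shared encoder states, one per token

# Task 1: intent detection — attention-pool the utterance into one vector.
w_intent = rng.standard_normal(d)          # intent-specific attention weights
alpha_intent = softmax(H @ w_intent)       # (T,) distribution over tokens
c_intent = alpha_intent @ H                # (d,) utterance context vector
intent_logits = rng.standard_normal((n_intents, d)) @ c_intent

# Task 2: slot filling — each token attends over all positions with its own,
# independent attention parameters (the coupling the quoted line points out).
W_slot = rng.standard_normal((d, d))       # slot-specific attention weights
alpha_slot = softmax(H @ W_slot @ H.T)     # (T, T), one distribution per token
C_slot = alpha_slot @ H                    # (T, d) per-token context vectors
slot_logits = C_slot @ rng.standard_normal((d, n_slots))  # (T, n_slots)
```

The shared encoder is what makes the model "joint"; the separate `w_intent` and `W_slot` parameters are what "independent attention weights" refers to.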
This paper summarises the experimental setup and results of the first shared task on end-to-end (E2E) natural language generation (NLG) in spoken dialogue systems.
We present a novel natural language generation system for spoken dialogue systems capable of entraining (adapting) to users' way of speaking, providing contextually appropriate responses.
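A minimal sketch of lexical entrainment in template-based NLG: when filling a slot, reuse whichever synonym the user actually said. The synonym table and `entrain` helper are hypothetical, not the described system.

```python
# Hypothetical synonym table: surface variants for a "price" slot value.
SYNONYMS = {"price": ("cheap", "inexpensive", "budget-friendly")}

def entrain(template: str, user_utterance: str, slot: str = "price") -> str:
    """Fill the slot with the variant the user used, else a default."""
    tokens = user_utterance.lower().split()
    variants = SYNONYMS[slot]
    choice = next((v for v in variants if v in tokens), variants[0])
    return template.format(price=choice)

print(entrain("I found some {price} hotels downtown.",
              "any inexpensive places to stay?"))
# → I found some inexpensive hotels downtown.
```

Echoing the user's own word choice is one simple form of entrainment; the cited system adapts more broadly to the user's way of speaking.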
Natural language generation (NLG) is a critical component of spoken dialogue systems, with a significant impact on both usability and perceived quality.
Spoken language understanding (SLU) is an essential component in conversational systems.
Cross-domain natural language generation (NLG) remains a difficult task in spoken dialogue modelling.