We present a new neural architecture for wide-coverage natural language understanding (NLU) in spoken dialogue systems.
Using DSTC2 as seed data, we trained NLU and natural language generation (NLG) networks for each agent and let the agents interact online; a sketch of this interaction loop is given below.
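The following is a minimal sketch of the online interaction loop, not the trained system itself: the stub functions nlu_decode and nlg_generate, the dialogue-act structure, and the agent names are all illustrative assumptions standing in for the learned NLU and NLG networks.

```python
def nlu_decode(utterance: str) -> dict:
    """Stub NLU: map the partner's turn to a dialogue act with slot values."""
    words = utterance.split()
    return {"act": "inform", "slots": {"food": words[-2] if len(words) > 1 else "any"}}

def nlg_generate(state: dict) -> str:
    """Stub NLG: realize a dialogue act as a surface utterance."""
    return f"I would like {state['slots']['food']} food"

def interact(seed_utterance: str, turns: int = 4) -> None:
    """Alternate two agents; each understands the last turn and replies."""
    agents = ("user_agent", "system_agent")
    utterance = seed_utterance
    for t in range(turns):
        speaker = agents[t % 2]
        state = nlu_decode(utterance)     # understand the partner's turn
        utterance = nlg_generate(state)   # generate this agent's reply
        print(f"turn {t} [{speaker}]: {utterance}")

interact("I would like italian food")
```

In the actual setup, each stub would be replaced by the corresponding network seeded on DSTC2, so the agents' exchanges generate new training interactions online.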
We further aim to mitigate the shortcomings of automatic evaluation of open-domain dialogue systems through multi-reference evaluation.
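As one concrete illustration, a minimal sketch of multi-reference scoring with BLEU, assuming NLTK is available: sentence_bleu natively accepts several references per hypothesis, which is the property multi-reference evaluation exploits. The example sentences are illustrative only.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

hypothesis = "i am doing well thanks".split()
references = [                      # several valid replies to the same context
    "i am fine thank you".split(),
    "doing well thanks for asking".split(),
    "pretty good how about you".split(),
]

smooth = SmoothingFunction().method1
single_ref = sentence_bleu([references[0]], hypothesis, smoothing_function=smooth)
multi_ref = sentence_bleu(references, hypothesis, smoothing_function=smooth)

# Scoring against all plausible references credits responses that are
# adequate but happen to differ from any single gold reply.
print(f"single-reference BLEU: {single_ref:.3f}")
print(f"multi-reference  BLEU: {multi_ref:.3f}")
```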
The architecture is based on a simple and practical yet effective sequence-to-sequence approach, in which language understanding and state tracking are modeled jointly, using a structured copy-augmented sequential decoder and a multi-label decoder for each slot.
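The copy-augmented decoding step can be sketched as a pointer-generator-style mixture; this is a minimal sketch under that assumption, and the tensor names and sizes are illustrative rather than the paper's exact architecture.

```python
import torch

def copy_augmented_dist(vocab_logits: torch.Tensor,  # (batch, vocab)
                        attn: torch.Tensor,          # (batch, src_len), rows sum to 1
                        src_ids: torch.Tensor,       # (batch, src_len) vocabulary ids
                        p_gen: torch.Tensor) -> torch.Tensor:  # (batch, 1) in [0, 1]
    """Mix the generation distribution with a copy distribution over source tokens."""
    vocab_dist = torch.softmax(vocab_logits, dim=-1)
    out = p_gen * vocab_dist
    # Route the remaining probability mass onto the vocabulary ids of the
    # source tokens, weighted by the decoder's attention over the source.
    out = out.scatter_add(1, src_ids, (1.0 - p_gen) * attn)
    return out  # (batch, vocab): a proper distribution over the vocabulary

# Toy usage with made-up sizes.
batch, vocab, src_len = 2, 10, 4
vocab_logits = torch.randn(batch, vocab)
attn = torch.softmax(torch.randn(batch, src_len), dim=-1)
src_ids = torch.randint(0, vocab, (batch, src_len))
p_gen = torch.sigmoid(torch.randn(batch, 1))
dist = copy_augmented_dist(vocab_logits, attn, src_ids, p_gen)
print(dist.sum(dim=-1))  # each row sums to 1
```

The per-slot multi-label decoder would sit alongside this, e.g. as independent sigmoid outputs over each slot's candidate values, so that a slot can take several values in one turn.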
The multilingual BERT model is trained on 104 languages and is intended to serve as a universal language model and sentence encoder.
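Encoding sentences with multilingual BERT can be done through the Hugging Face transformers library, as in the minimal sketch below; mean pooling over token states is one common but assumed choice of sentence representation, not one prescribed here.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

# The same encoder handles sentences from any of the 104 training languages.
sentences = ["The weather is nice today.", "Das Wetter ist heute schön."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state    # (batch, seq_len, 768)

# Mean-pool over real tokens only, using the attention mask to skip padding.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(1) / mask.sum(1)  # (batch, 768)
print(embeddings.shape)
```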