SUMBT+LaRL: Effective Multi-domain End-to-end Neural Task-oriented Dialog System

22 Sep 2020  ·  Hwaran Lee, Seokhwan Jo, HyungJun Kim, SangKeun Jung, Tae-Yoon Kim

Recent neural approaches to developing individual dialog components of task-oriented dialog systems have brought remarkable improvements, yet optimizing the overall system performance remains a challenge. Moreover, previous research on modeling complicated multi-domain goal-oriented dialogs in an end-to-end fashion has been limited. In this paper, we present SUMBT+LaRL, an effective multi-domain end-to-end trainable neural dialog system that incorporates two strong previous models and makes them fully differentiable. Specifically, SUMBT+ estimates user acts as well as dialog belief states, and LaRL models latent system action spaces and generates responses given the estimated contexts. We emphasize that our three-step training framework significantly and stably increases dialog success rates: separately pretraining SUMBT+ and LaRL, fine-tuning the entire system, and then applying reinforcement learning to the dialog policy. We also introduce new reward criteria for reinforcement learning of the dialog policy, and we discuss experimental results under the different reward criteria and dialog evaluation methods. Consequently, our model achieves a new state-of-the-art success rate of 85.4% on corpus-based evaluation and a comparable success rate of 81.40% on the simulator-based evaluation provided by the DSTC8 challenge. To the best of our knowledge, our work is the first comprehensive study of a modularized end-to-end multi-domain dialog system that covers learning from each component up to the entire dialog policy for task success.
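
The three-step training framework can be pictured with a minimal PyTorch-style sketch. Everything below is illustrative and assumed rather than taken from the paper's code: the module names (SUMBTPlus, LaRL), their tiny GRU/linear internals, the toy data, and the random placeholder reward stand in for the actual BERT-based SUMBT+, the latent-action LaRL decoder, and the paper's reward criteria. The sketch only shows how the three stages compose: supervised pretraining of each module, end-to-end fine-tuning through a shared context representation, and REINFORCE-style policy learning on top.

    # Hypothetical sketch of the three-step training framework (not the authors' code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SUMBTPlus(nn.Module):
        # Stand-in for SUMBT+: encodes the dialog context and predicts
        # belief states and user acts with simple linear heads.
        def __init__(self, ctx_dim=64, n_slots=10, n_user_acts=5):
            super().__init__()
            self.encoder = nn.GRU(ctx_dim, ctx_dim, batch_first=True)
            self.belief_head = nn.Linear(ctx_dim, n_slots)
            self.user_act_head = nn.Linear(ctx_dim, n_user_acts)

        def forward(self, ctx):                      # ctx: (batch, turns, ctx_dim)
            h, _ = self.encoder(ctx)
            h_last = h[:, -1]                        # last-turn representation
            return self.belief_head(h_last), self.user_act_head(h_last), h_last

    class LaRL(nn.Module):
        # Stand-in for LaRL: maps the estimated context to a latent action z,
        # then decodes a (toy) system response distribution from z.
        def __init__(self, ctx_dim=64, latent_dim=16, vocab=100):
            super().__init__()
            self.policy = nn.Linear(ctx_dim, latent_dim)   # latent action "policy"
            self.decoder = nn.Linear(latent_dim, vocab)    # toy response decoder

        def forward(self, ctx_repr):
            z = torch.tanh(self.policy(ctx_repr))
            return self.decoder(z), z

    sumbt, larl = SUMBTPlus(), LaRL()

    # Toy batch: dialog context plus belief, user-act, and response labels.
    ctx = torch.randn(8, 4, 64)
    belief_y = torch.randint(0, 2, (8, 10)).float()
    act_y = torch.randint(0, 5, (8,))
    resp_y = torch.randint(0, 100, (8,))

    # Step 1: pretrain SUMBT+ and LaRL separately (supervised).
    opt_sumbt = torch.optim.Adam(sumbt.parameters(), lr=1e-3)
    belief_logits, act_logits, ctx_repr = sumbt(ctx)
    loss = F.binary_cross_entropy_with_logits(belief_logits, belief_y) \
           + F.cross_entropy(act_logits, act_y)
    loss.backward(); opt_sumbt.step(); opt_sumbt.zero_grad()

    opt_larl = torch.optim.Adam(larl.parameters(), lr=1e-3)
    resp_logits, _ = larl(ctx_repr.detach())         # LaRL pretrained on its own
    F.cross_entropy(resp_logits, resp_y).backward(); opt_larl.step(); opt_larl.zero_grad()

    # Step 2: fine-tune the whole pipeline end to end (fully differentiable).
    opt_all = torch.optim.Adam(list(sumbt.parameters()) + list(larl.parameters()), lr=1e-4)
    _, _, ctx_repr = sumbt(ctx)
    resp_logits, _ = larl(ctx_repr)                  # gradients now flow into SUMBT+
    F.cross_entropy(resp_logits, resp_y).backward(); opt_all.step(); opt_all.zero_grad()

    # Step 3: reinforcement learning of the dialog policy with a task-success
    # reward (REINFORCE; the random reward below is only a placeholder).
    _, _, ctx_repr = sumbt(ctx)
    resp_logits, z = larl(ctx_repr)
    log_prob = F.log_softmax(resp_logits, dim=-1).gather(1, resp_y.unsqueeze(1)).squeeze(1)
    reward = torch.rand(8)                           # stand-in for a dialog success reward
    (-(reward * log_prob).mean()).backward(); opt_all.step(); opt_all.zero_grad()

The point the sketch tries to convey is that the belief/user-act estimator and the response generator share a differentiable context representation, so end-to-end fine-tuning and the success-driven policy gradient can update both modules jointly; in the actual system, the placeholder reward would be replaced by the paper's proposed reward criteria.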
