Global-Locally Self-Attentive Dialogue State Tracker

19 May 2018  ·  Victor Zhong, Caiming Xiong, Richard Socher ·

Dialogue state tracking, which estimates user goals and requests given the dialogue context, is an essential part of task-oriented dialogue systems. In this paper, we propose the Global-Locally Self-Attentive Dialogue State Tracker (GLAD), which learns representations of the user utterance and previous system actions with global-local modules. Our model uses global modules to share parameters between estimators for different types (called slots) of dialogue states, and uses local modules to learn slot-specific features. We show that this significantly improves tracking of rare states and achieves state-of-the-art performance on the WoZ and DSTC2 state tracking tasks. GLAD obtains 88.1% joint goal accuracy and 97.1% request accuracy on WoZ, outperforming prior work by 3.7% and 5.5%. On DSTC2, our model obtains 74.5% joint goal accuracy and 97.5% request accuracy, outperforming prior work by 1.1% and 1.0%.
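The core idea described above can be sketched in code: each slot gets a small local encoder, all slots share a global encoder, and the two outputs are mixed by a learned slot-specific gate. The sketch below is illustrative only, assuming simple linear encoders and a scalar mixing logit `beta` per slot (the paper uses self-attentive recurrent encoders); all names are hypothetical, not from the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GlobalLocalEncoder:
    """Minimal sketch of a global-local module: a global encoder whose
    parameters are shared across all slots, a local encoder per slot,
    and a slot-specific scalar gate blending the two representations."""

    def __init__(self, slots, dim, seed=0):
        rng = np.random.default_rng(seed)
        # Shared (global) parameters, reused by every slot's estimator.
        self.W_global = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        # Slot-specific (local) parameters, one matrix per slot.
        self.W_local = {s: rng.standard_normal((dim, dim)) / np.sqrt(dim)
                        for s in slots}
        # Slot-specific mixing logit; learned jointly in practice.
        self.beta = {s: 0.0 for s in slots}

    def encode(self, x, slot):
        g = np.tanh(x @ self.W_global)       # shared representation
        l = np.tanh(x @ self.W_local[slot])  # slot-specific representation
        a = sigmoid(self.beta[slot])         # gate: how much local vs. global
        return a * l + (1.0 - a) * g
```

Because the global parameters receive gradient signal from every slot's estimator, rare slot values can still be tracked well, which is the motivation for the parameter sharing described in the abstract.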

Task                                  Dataset                                           Model         Metric     Value  Rank
Multi-domain Dialogue State Tracking  MULTIWOZ 2.0                                      GLAD          Joint Acc  35.57  #21
Dialogue State Tracking               Second Dialogue State Tracking Challenge (DSTC2)  Zhong et al.  Request    97.5   #1
                                                                                                      Area       -      #4
                                                                                                      Food       -      #4
                                                                                                      Price      -      #4
                                                                                                      Joint      74.5   #3
Dialogue State Tracking               Wizard-of-Oz (WoZ)                                Zhong et al.  Request    97.1   #3
                                                                                                      Joint      88.1   #8
