Multi-Domain Dialogue State Tracking based on State Graph

21 Oct 2020  ·  Yan Zeng, Jian-Yun Nie

We investigate the problem of multi-domain Dialogue State Tracking (DST) with an open vocabulary, which aims to extract the dialogue state from the dialogue history. Existing approaches usually concatenate the previous dialogue state with the dialogue history as the input to a bi-directional Transformer encoder, relying on the Transformer's self-attention mechanism to connect tokens across the two. However, attention may be paid to spurious connections, leading to wrong inference. In this paper, we propose to construct a dialogue state graph in which the domains, slots and values from the previous dialogue state are connected properly. Through training, the graph node and edge embeddings encode co-occurrence relations between domain-domain, slot-slot and domain-slot pairs, reflecting the strong transition paths observed in general dialogue. The state graph, encoded with a relational GCN, is fused into the Transformer encoder. Experimental results show that our approach achieves a new state of the art on the task while remaining efficient, and that it outperforms existing open-vocabulary DST approaches.
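To make the graph construction and encoding steps more concrete, the sketch below shows, in plain PyTorch, how a previous dialogue state could be turned into a graph over domain, slot and value nodes and then encoded with a small relational GCN layer. This is an illustrative sketch under stated assumptions, not the authors' implementation: the relation types, the co-occurrence edges, the dimensions, the omitted degree normalisation, and the fusion step (indicated only as a comment) are all assumptions made for the example.

```python
# Minimal sketch (not the authors' code) of the two ideas in the abstract:
# (1) build a graph over the domains, slots and values of the previous
#     dialogue state, and (2) encode it with a relational GCN so that the
#     node representations can later be fused into the Transformer encoder.
# Relation types, edge choices and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

# --- 1. Build the state graph from a previous dialogue state ---------------
prev_state = {"hotel-area": "centre", "hotel-stars": "4",
              "restaurant-food": "italian"}          # MultiWOZ-style state

node_index = {}                                      # token -> node id
def node_id(tok):
    return node_index.setdefault(tok, len(node_index))

REL_DOMAIN_SLOT, REL_SLOT_VALUE, REL_DOMAIN_DOMAIN = 0, 1, 2
edges, domains = [], set()                           # edges: (src, dst, relation)
for domain_slot, value in prev_state.items():
    domain, _ = domain_slot.split("-", 1)
    d, s, v = node_id(domain), node_id(domain_slot), node_id(value)
    edges.append((d, s, REL_DOMAIN_SLOT))            # domain -> slot
    edges.append((s, v, REL_SLOT_VALUE))             # slot   -> value
    domains.add(d)
for d1 in domains:                                   # domains co-occurring in one state
    for d2 in domains:
        if d1 != d2:
            edges.append((d1, d2, REL_DOMAIN_DOMAIN))

# --- 2. Encode the graph with a small relational GCN layer -----------------
class RGCNLayer(nn.Module):
    """One R-GCN layer: a separate weight matrix per relation type
    (degree normalisation omitted for brevity)."""
    def __init__(self, dim, num_rels):
        super().__init__()
        self.rel = nn.ModuleList(nn.Linear(dim, dim, bias=False)
                                 for _ in range(num_rels))
        self.self_loop = nn.Linear(dim, dim, bias=False)

    def forward(self, h, edges):
        out = [self.self_loop(h[i]) for i in range(h.size(0))]
        for src, dst, rel in edges:                  # message passing per edge
            out[dst] = out[dst] + self.rel[rel](h[src])
        return torch.relu(torch.stack(out))

dim = 64
node_emb = nn.Embedding(len(node_index), dim)        # learned node embeddings
graph_repr = RGCNLayer(dim, num_rels=3)(node_emb.weight, edges)

# graph_repr (num_nodes x dim) would then be fused into the Transformer
# encoder, e.g. as extra memory that dialogue-history tokens attend to.
print(graph_repr.shape)                              # torch.Size([8, 64])
```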


Datasets

MULTIWOZ 2.0, MULTIWOZ 2.1

Results
Task                                    Dataset        Model      Metric     Value   Global Rank
Multi-domain Dialogue State Tracking    MULTIWOZ 2.0   Graph-DST  Joint Acc  52.78   #5
Multi-domain Dialogue State Tracking    MULTIWOZ 2.1   Graph-DST  Joint Acc  53.85   #14
