Dialog state tracking (DST) suffers from severe data sparsity.
In this paper, we explore three ways of leveraging an auxiliary task to shape the latent variable distribution: via pre-training, via an informed prior, and via multitask learning.
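The "informed prior" idea can be made concrete with a small sketch. In latent-variable models, the posterior is typically regularized toward a prior via a KL term; replacing the standard normal prior with one fitted on the auxiliary task shrinks that penalty when the auxiliary task is related. The code below is a minimal illustration under assumed diagonal-Gaussian distributions; the specific means and variances are hypothetical, not taken from the paper.

```python
import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL divergence KL(q || p) between two diagonal Gaussians."""
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

# Hypothetical posterior produced by the DST encoder for one dialogue turn.
mu_q, var_q = np.array([0.5, -0.2]), np.array([0.4, 0.6])

# Standard prior N(0, I) vs. an "informed" prior whose parameters were
# (hypothetically) estimated on the auxiliary task.
standard = gaussian_kl(mu_q, var_q, np.zeros(2), np.ones(2))
informed = gaussian_kl(mu_q, var_q, np.array([0.4, -0.1]), np.array([0.5, 0.5]))

print(standard, informed)  # the informed prior incurs a smaller KL penalty here
```

In a multitask setup the same latent encoder would instead be shared across both tasks, with the two losses combined in a weighted sum.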
The ability to accurately track what happens during a conversation is essential for the performance of a dialogue system.
In this paper, we present a new approach to DST that makes use of various copy mechanisms to fill slots with values.
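At its core, a copy mechanism for slot filling scores each token in the dialogue history against a slot representation and copies the best-matching token as the slot value, rather than generating the value from a fixed vocabulary. The sketch below uses toy one-hot embeddings and a hand-aligned slot query purely for illustration; the real model's scoring function and representations are not specified here.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def copy_slot_value(slot_query, token_embeddings, tokens):
    """Score each history token against the slot query (dot-product
    attention) and copy the highest-scoring token as the slot value."""
    scores = token_embeddings @ slot_query
    probs = softmax(scores)          # copy distribution over history tokens
    return tokens[int(np.argmax(probs))], probs

# Toy dialogue history: the user asks for a cheap restaurant.
tokens = ["i", "want", "a", "cheap", "restaurant"]
emb = np.eye(len(tokens), 8)         # toy one-hot token embeddings (hypothetical)
query = emb[3]                        # slot query aligned with "cheap" by construction

value, probs = copy_slot_value(query, emb, tokens)
print(value)  # → cheap
```

Because the value is copied from the dialogue itself, this handles open-ended slots (names, times) whose values may never appear in a predefined ontology.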