Time Masking: Leveraging Temporal Information in Spoken Dialogue Systems

WS 2019  ·  Rylan Conway, Lambert Mathias

In a spoken dialogue system, the dialogue state tracker (DST) tracks the state of the conversation by updating, at each user turn, a distribution over the values of each tracked slot, using the interaction history up to that point. Much of the previous work has relied on modeling the natural order of the conversation, using distance-based offsets as an approximation of time. In this work, we hypothesize that leveraging the wall-clock temporal difference between turns is crucial for finer-grained control of dialogue scenarios. We develop a novel approach that applies a *time mask*, based on the wall-clock time difference, to the associated slot embeddings, and empirically demonstrate that it outperforms existing approaches that leverage distance offsets, on both an internal benchmark dataset and DSTC2.
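The core idea above — gating a slot embedding by a function of the wall-clock gap between turns — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the mask generator here is a single randomly initialised linear layer followed by a sigmoid, and all dimensions, parameter names (`W_t`, `b_t`), and the `log1p` compression of the time gap are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding size -- illustrative only.
EMB_DIM = 8

# Parameters of the mask generator; in a real DST these would be learned.
W_t = rng.normal(size=(1, EMB_DIM))
b_t = np.zeros(EMB_DIM)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def time_mask(delta_t_seconds):
    """Map the wall-clock gap between turns to a gate in (0, 1)^EMB_DIM."""
    # log1p compresses large gaps so hours and days stay on a similar scale.
    t = np.array([[np.log1p(delta_t_seconds)]])
    return sigmoid(t @ W_t + b_t).ravel()

def apply_time_mask(slot_embedding, delta_t_seconds):
    """Element-wise gate the slot embedding by the time mask."""
    return slot_embedding * time_mask(delta_t_seconds)

# Usage: mask a slot embedding given a 42-second gap since the last turn.
slot_emb = rng.normal(size=EMB_DIM)
masked = apply_time_mask(slot_emb, delta_t_seconds=42.0)
```

Because the mask values lie strictly in (0, 1), the gate can only attenuate — never amplify — each embedding dimension, which is one simple way a model could learn to discount stale slot values as the gap between turns grows.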


Datasets

DSTC2

Results from the Paper


Ranked #6 on Video Salient Object Detection on SegTrack v2 (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Video Salient Object Detection | MCL | TIMP | S-Measure | 0.642 | #6 |
| Video Salient Object Detection | MCL | TIMP | Max E-Measure | 0.760 | #6 |
| Video Salient Object Detection | MCL | TIMP | Average MAE | 0.113 | #4 |
| Video Salient Object Detection | SegTrack v2 | TIMP | S-Measure | 0.644 | #6 |
| Video Salient Object Detection | SegTrack v2 | TIMP | Average MAE | 0.116 | #7 |
| Video Salient Object Detection | SegTrack v2 | TIMP | Max E-Measure | 0.768 | #6 |

Methods


No methods listed for this paper.