Dialogue state tracking consists of determining, at each turn of a dialogue, the full representation of what the user wants at that point in the dialogue: a goal constraint, a set of requested slots, and the user's dialogue act.
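The three components of a dialogue state described above can be sketched as a small data structure. This is a minimal illustration, not the schema of any particular toolkit; the class name, field names, and slot labels are assumptions chosen for readability.

```python
from dataclasses import dataclass

# Illustrative per-turn dialogue state: a goal constraint (slot -> value),
# the set of slots the user has requested, and the user's dialogue act.
# All names here are hypothetical, for exposition only.
@dataclass
class DialogueState:
    goal_constraint: dict    # slots the user has constrained, e.g. cuisine
    requested_slots: set     # slots the user asked the system to provide
    dialogue_act: str        # e.g. "inform", "request", "confirm"

# Example state after a turn like:
# "I'd like an Italian restaurant in the centre. What's the phone number?"
state = DialogueState(
    goal_constraint={"restaurant-food": "italian",
                     "restaurant-area": "centre"},
    requested_slots={"phone"},
    dialogue_act="inform",
)
```

A tracker updates such a state turn by turn, carrying constraints forward until the user revises them.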
Dialogue state tracking is a key component of a dialogue system.
We show that data augmentation through synthesized data can improve the accuracy of zero-shot learning for both the TRADE model and the BERT-based SUMBT model on the MultiWOZ 2.1 dataset.
Building an end-to-end conversational agent for multi-domain task-oriented dialogue has been an open challenge for two main reasons.
In task-oriented dialogue systems, Dialogue State Tracking (DST) is a core component, responsible for tracking users' goals over the whole course of a conversation; these tracked goals are then used to decide the next action to take.
In this paper, we propose using machine reading comprehension (RC) in state tracking from two perspectives: model architectures and datasets.
In this paper, a novel context and schema fusion network is proposed to encode the dialogue context and schema graph by using internal and external attention mechanisms.
While several state-of-the-art approaches to dialogue state tracking (DST) have shown promising performance on several benchmarks, there is still a significant performance gap between seen slot values (i.e., values that occur in both the training set and the test set) and unseen ones (values that occur in the test set but not in the training set).
As a baseline approach, we trained task-specific Statistical Language Models (SLM) and fine-tuned a state-of-the-art Generative Pre-Training (GPT) language model to re-rank the n-best ASR hypotheses, followed by a model to identify the dialog act and slots.