Dialog State Tracking
29 papers with code • 4 benchmarks • 2 datasets
Benchmarks
These leaderboards are used to track progress in dialog state tracking.
Libraries
Use these libraries to find dialog state tracking models and implementations.

Most implemented papers
Variational Hierarchical Dialog Autoencoder for Dialog State Tracking Data Augmentation
Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models complement the training dataset, benefits NLP tasks.
Non-Autoregressive Dialog State Tracking
Recent efforts in Dialogue State Tracking (DST) for task-oriented dialogues have progressed toward open-vocabulary or generation-based approaches where the models can generate slot value candidates from the dialogue history itself.
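At its core, DST maintains a slot-value map that is updated after every user turn. A minimal sketch of that idea follows, with a simple keyword matcher standing in for the neural model described in the paper; the slot names and patterns are illustrative only (loosely in the style of MultiWOZ), not the paper's actual ontology or method:

```python
# Toy dialog state tracker: keep a slot-value map and update it from
# each user utterance. A real open-vocabulary DST model would *generate*
# the values from the dialogue history; a regex matcher stands in here.
import re

# Illustrative slots and value patterns (not from any real ontology).
SLOT_PATTERNS = {
    "restaurant-food": re.compile(r"\b(italian|chinese|indian)\b"),
    "restaurant-area": re.compile(r"\b(north|south|centre|east|west)\b"),
    "restaurant-pricerange": re.compile(r"\b(cheap|moderate|expensive)\b"),
}

def update_state(state, user_utterance):
    """Return a new dialog state with slots mentioned this turn filled in."""
    new_state = dict(state)  # earlier turns' slots carry over
    text = user_utterance.lower()
    for slot, pattern in SLOT_PATTERNS.items():
        match = pattern.search(text)
        if match:
            new_state[slot] = match.group(1)
    return new_state

# The state accumulates across turns of the dialog.
state = {}
for turn in ["I want a cheap Italian restaurant.",
             "Somewhere in the centre, please."]:
    state = update_state(state, turn)

print(state)
```

After the two turns, the state holds all three slot values, with the second turn adding `restaurant-area` while preserving the first turn's slots.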
Schema-Guided Natural Language Generation
We train different state-of-the-art models for neural natural language generation on this dataset and show that in many cases, including rich schema information allows our models to produce higher quality outputs both in terms of semantics and diversity.
Enhancing Word Embeddings with Knowledge Extracted from Lexical Resources
In this work, we present an effective method for semantic specialization of word vector representations.
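The general idea behind semantic specialization can be sketched as an attract/repel procedure: synonym pairs from a lexical resource are pulled together and antonym pairs pushed apart. The sketch below is a deliberately simplified, made-up variant (toy vectors, fixed step size); the paper's actual method differs in its details:

```python
# Toy attract/repel specialization: nudge word vectors so that synonyms
# move closer together and antonyms move further apart. All vectors,
# pairs, and hyperparameters are illustrative.

def specialize(vectors, synonyms, antonyms, step=0.1, iters=10):
    """Return specialized copies of the input vectors."""
    v = {w: list(x) for w, x in vectors.items()}  # work on copies
    for _ in range(iters):
        for a, b in synonyms:      # attract: shrink the difference
            for i in range(len(v[a])):
                delta = step * (v[b][i] - v[a][i])
                v[a][i] += delta
                v[b][i] -= delta
        for a, b in antonyms:      # repel: grow the difference
            for i in range(len(v[a])):
                delta = step * (v[b][i] - v[a][i])
                v[a][i] -= delta
                v[b][i] += delta
    return v

# Made-up 2-D embeddings and lexical constraints.
vecs = {"cheap": [1.0, 0.0], "inexpensive": [0.0, 1.0],
        "hot": [1.0, 0.0], "cold": [0.0, 1.0]}
new = specialize(vecs,
                 synonyms=[("cheap", "inexpensive")],
                 antonyms=[("hot", "cold")])
```

Each attract step scales the synonym pair's difference by (1 - 2·step) and each repel step scales the antonym pair's difference by (1 + 2·step), so after a few iterations the synonyms are measurably closer and the antonyms further apart.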
A Fast and Robust BERT-based Dialogue State Tracker for Schema-Guided Dialogue Dataset
Dialog State Tracking (DST) is one of the most crucial modules for goal-oriented dialogue systems.
Conversational Semantic Parsing for Dialog State Tracking
We consider a new perspective on dialog state tracking (DST), the task of estimating a user's goal through the course of a dialog.
An Empirical Study of Cross-Lingual Transferability in Generative Dialogue State Tracker
There has been a rapid development in data-driven task-oriented dialogue systems with the benefit of large-scale datasets.
Self-training Improves Pre-training for Few-shot Learning in Task-oriented Dialog Systems
In this paper, we devise a self-training approach to utilize the abundant unlabeled dialog data to further improve state-of-the-art pre-trained models in few-shot learning scenarios for ToD systems.
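The general self-training recipe this paper builds on works as follows: a model trained on a small labeled set predicts labels for unlabeled data, and confident predictions are added to the training set for the next round. A toy sketch of that loop, with a nearest-centroid classifier over 1-D points standing in for the pre-trained model (data, margin threshold, and round count are all made up):

```python
# Toy self-training (pseudo-labeling) loop. Confident predictions on
# unlabeled points are promoted to labeled examples and the "model"
# (a nearest-centroid classifier) is refit each round.

def fit_centroids(labeled):
    """Compute one centroid per class from (x, label) pairs."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def self_train(labeled, unlabeled, margin=1.0, rounds=3):
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        centroids = fit_centroids(labeled)
        still_unlabeled = []
        for x in unlabeled:
            # Keep a pseudo-label only when the gap between the two
            # nearest centroids exceeds the confidence margin.
            dists = sorted((abs(x - c), y) for y, c in centroids.items())
            if len(dists) > 1 and dists[1][0] - dists[0][0] >= margin:
                labeled.append((x, dists[0][1]))
            else:
                still_unlabeled.append(x)
        unlabeled = still_unlabeled
    return labeled, unlabeled

seed = [(0.0, "neg"), (10.0, "pos")]
labeled, unlabeled = self_train(seed, [1.0, 2.0, 9.0, 5.0])
```

Points near a centroid are absorbed with pseudo-labels, while the ambiguous point equidistant from both classes stays unlabeled, which is the behavior the confidence threshold is there to enforce.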
DS-TOD: Efficient Domain Specialization for Task Oriented Dialog
Recent work has shown that self-supervised dialog-specific pretraining on large conversational datasets yields substantial gains over traditional language modeling (LM) pretraining in downstream task-oriented dialog (TOD).
Multimodal Interactions Using Pretrained Unimodal Models for SIMMC 2.0
This paper presents our work on the Situated Interactive MultiModal Conversations 2.0 challenge held at Dialog State Tracking Challenge 10.