Dialog Act Classification
11 papers with code • 1 benchmark • 2 datasets
Most implemented papers
A Latent Variable Recurrent Neural Network for Discourse Relation Language Models
This paper presents a novel latent variable recurrent neural network architecture for jointly modeling sequences of words and (possibly latent) discourse relations between adjacent sentences.
Optimizing Neural Network Hyperparameters with Gaussian Processes for Dialog Act Classification
Gaussian process optimization is therefore a useful technique for tuning ANN models to yield the best performance on natural language processing tasks.
Probabilistic Word Association for Dialogue Act Classification with Recurrent Neural Networks
The identification of Dialogue Acts (DAs) is an important aspect of determining the meaning of an utterance for many applications that require natural language understanding, and recent work using recurrent neural networks (RNNs) has shown promising results when applied to the DA classification problem.
Conversational Analysis using Utterance-level Attention-based Bidirectional Recurrent Neural Networks
Recent approaches for dialogue act recognition have shown that context from preceding utterances is important to classify the subsequent one.
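The idea that preceding utterances inform the current label can be illustrated with a minimal, stdlib-only sketch. The dialogue, cue words, and the context rule below are all hypothetical toy examples, not any paper's actual model:

```python
# Toy dialog act classifier: score acts by cue-word overlap, and use the
# previous utterance's predicted act as weak context (hypothetical rule:
# a question is a likely follow-up to a request).
from collections import Counter

CUE_WORDS = {
    "greeting": {"hello", "hi", "hey"},
    "request": {"can", "could", "please"},
    "question": {"how", "what", "when"},
    "thanking": {"thanks", "thank"},
}

def classify(utterance, prev_act=None):
    """Score each act by cue-word overlap; add a small context bonus."""
    tokens = set(utterance.lower().split())
    scores = Counter({act: len(tokens & cues) for act, cues in CUE_WORDS.items()})
    if prev_act == "request":          # context from the preceding utterance
        scores["question"] += 0.5
    best, best_score = scores.most_common(1)[0]
    return best if best_score > 0 else "inform"

print(classify("hello there"))                        # greeting
print(classify("sure for how many people", "request"))  # question
```

A real system would replace the cue-word scores with learned utterance encodings, but the context term plays the same role as the preceding-utterance features in the attention-based models above.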
Self-Governing Neural Networks for On-Device Short Text Classification
Deep neural networks reach state-of-the-art performance for a wide range of natural language processing, computer vision, and speech applications.
Privacy Guarantees for De-identifying Text Transformations
Machine Learning approaches to Natural Language Processing tasks benefit from a comprehensive collection of real-life user data.
Sentence encoding for Dialogue Act classification
In this study, we investigate the process of generating single-sentence representations for Dialogue Act (DA) classification, including several aspects of text pre-processing and input representation that are often overlooked or under-reported in the literature, such as the number of words to keep in the vocabulary or input sequences.
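Two of the pre-processing choices the paper highlights, capping the vocabulary at the N most frequent words and fixing input sequences to a set length, can be sketched with the stdlib. The corpus and parameter values below are hypothetical:

```python
# Vocabulary truncation and sequence padding, two commonly under-reported
# pre-processing decisions.
from collections import Counter

def build_vocab(corpus, max_words):
    """Keep the max_words most frequent tokens; all others map to <unk>."""
    counts = Counter(tok for sent in corpus for tok in sent.split())
    vocab = {"<pad>": 0, "<unk>": 1}
    for tok, _ in counts.most_common(max_words):
        vocab[tok] = len(vocab)
    return vocab

def encode(sentence, vocab, seq_len):
    """Map tokens to ids, then truncate or right-pad to seq_len."""
    ids = [vocab.get(tok, vocab["<unk>"]) for tok in sentence.split()]
    return (ids + [vocab["<pad>"]] * seq_len)[:seq_len]

corpus = ["i would like to book a table", "book a table for two", "thank you"]
vocab = build_vocab(corpus, max_words=5)
encoded = encode("book a big table", vocab, seq_len=6)
```

Out-of-vocabulary words ("big" here) collapse to `<unk>`, so the choice of `max_words` directly trades vocabulary coverage against model size.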
DARER: Dual-task Temporal Relational Recurrent Reasoning Network for Joint Dialog Sentiment Classification and Act Recognition
To implement our framework, we propose a novel model dubbed DARER, which first generates context-, speaker-, and temporal-sensitive utterance representations by modeling SATG, and then conducts recurrent dual-task relational reasoning on DRTG, where the estimated label distributions act as key clues in prediction-level interactions.
CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset for Conversational AI
Finally, we provide baseline systems for these tasks and examine the influence of speakers' personalities and emotions on conversation.
TOD-Flow: Modeling the Structure of Task-Oriented Dialogues
Our TOD-Flow graph learns what a model can, should, and should not predict, effectively reducing the search space and providing a rationale for the model's prediction.
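The "can / should / should not" constraints can be pictured as a pruning step over a ranked list of candidate dialog acts. The acts and constraint tables below are hypothetical illustrations, not the paper's actual TOD-Flow algorithm:

```python
# Hypothetical constraint tables: after a user "request", the system *can*
# ask a question, confirm, or inform, and *should not* greet or say goodbye.
CAN = {"request": {"question", "confirm", "inform"}}
SHOULD_NOT = {"request": {"greeting", "goodbye"}}

def prune(prev_act, candidates):
    """Drop candidate acts the constraint graph disallows for this state,
    shrinking the search space a downstream model must rank."""
    allowed = CAN.get(prev_act, set(candidates))
    blocked = SHOULD_NOT.get(prev_act, set())
    return [c for c in candidates if c in allowed and c not in blocked]

ranked = ["greeting", "question", "goodbye", "confirm"]
print(prune("request", ranked))  # ['question', 'confirm']
```

Filtering before ranking is also what makes the prediction interpretable: each surviving candidate comes with a rationale from the constraint that admitted it.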