Emotion Recognition in Conversation
72 papers with code • 12 benchmarks • 14 datasets
Given the transcript of a conversation along with speaker information for each constituent utterance, the ERC task aims to identify the emotion of each utterance from a set of pre-defined emotions. Formally, given an input sequence of N utterances [(u1, p1), (u2, p2), ..., (uN, pN)], where each utterance ui = [ui,1, ui,2, ..., ui,T] consists of T words ui,j and is spoken by party pi, the task is to predict the emotion label ei of each utterance ui.
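The task formulation above can be sketched in code. This is a minimal illustration of the ERC input/output structure only; the label set and the keyword heuristic standing in for a model are made-up placeholders, not any specific system.

```python
# Minimal sketch of the ERC task interface: a conversation is a list of
# (utterance, speaker) pairs [(u1, p1), ..., (uN, pN)], and the task is
# to produce one emotion label ei per utterance ui.

EMOTIONS = ["neutral", "joy", "sadness", "anger"]  # illustrative label set

def predict_emotions(conversation):
    """Map each (utterance, speaker) pair to an emotion label.

    conversation: list of (utterance: str, speaker: str) tuples.
    Returns a list with one label per utterance.
    """
    labels = []
    for utterance, speaker in conversation:
        # Placeholder heuristic; a real ERC model would condition on the
        # full conversational context and the speaker information here.
        label = "joy" if "!" in utterance else "neutral"
        labels.append(label)
    return labels

dialogue = [
    ("I got the job!", "A"),
    ("That's great news.", "B"),
]
print(predict_emotions(dialogue))  # one label per utterance
```

The point is only the shape of the problem: context (all N utterances) and speaker identity are part of the input, and the output is a per-utterance classification.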
Latest papers
Distribution-based Emotion Recognition in Conversation
Automatic emotion recognition in conversation (ERC) is crucial for emotion-aware conversational artificial intelligence.
Supervised Prototypical Contrastive Learning for Emotion Recognition in Conversation
Capturing emotions within a conversation plays an essential role in modern dialogue systems.
GRASP: Guiding model with RelAtional Semantics using Prompt for Dialogue Relation Extraction
To effectively exploit inherent knowledge of PLMs without extra layers and consider scattered semantic cues on the relation between the arguments, we propose a Guiding model with RelAtional Semantics using Prompt (GRASP).
Contextual Information and Commonsense Based Prompt for Emotion Recognition in Conversation
The newly proposed ERC models have leveraged pre-trained language models (PLMs) with the paradigm of pre-training and fine-tuning to obtain good performance.
GA2MIF: Graph and Attention Based Two-Stage Multi-Source Information Fusion for Conversational Emotion Detection
Multimodal Emotion Recognition in Conversation (ERC) plays an influential role in the field of human-computer interaction and conversational robotics since it can motivate machines to provide empathetic services.
GraphCFC: A Directed Graph Based Cross-Modal Feature Complementation Approach for Multimodal Conversational Emotion Recognition
In multimodal ERC, GNNs are capable of extracting both long-distance contextual information and inter-modal interactive information.
The Emotion is Not One-hot Encoding: Learning with Grayscale Label for Emotion Recognition in Conversation
That is, instead of using a given label as a one-hot encoding, we construct a grayscale label by measuring scores for different emotions.
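One illustrative way to build such a grayscale label is to keep most of the probability mass on the gold emotion and spread the remainder over the other emotions in proportion to a similarity score. The similarity values and the 0.7/0.3 split below are invented for illustration and are not the paper's actual construction.

```python
# Illustrative "grayscale" label: instead of a one-hot vector for the
# gold emotion, distribute some probability mass over related emotions.

def grayscale_label(gold, emotions, similarity, keep=0.7):
    """Build a soft label: `keep` mass on the gold emotion, the rest
    split across the other emotions in proportion to their similarity
    to the gold emotion. Returns a dict summing to 1.0."""
    others = [e for e in emotions if e != gold]
    total = sum(similarity[(gold, e)] for e in others) or 1.0
    label = {}
    for e in emotions:
        if e == gold:
            label[e] = keep
        else:
            label[e] = (1.0 - keep) * similarity[(gold, e)] / total
    return label

# Made-up pairwise similarities between emotion categories.
emotions = ["joy", "sadness", "anger"]
sim = {("joy", "sadness"): 0.1, ("joy", "anger"): 0.1,
       ("sadness", "joy"): 0.1, ("sadness", "anger"): 0.6,
       ("anger", "joy"): 0.1, ("anger", "sadness"): 0.6}
print(grayscale_label("sadness", emotions, sim))
```

Training then targets this soft distribution (e.g. with a cross-entropy against soft labels) rather than a single one-hot class.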
CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset for Conversational AI
Finally, we provide baseline systems for these tasks and consider the function of speakers' personalities and emotions on conversation.
EmotionFlow: Capture the Dialogue Level Emotion Transitions
However, the way emotions spread and influence one another across a conversation is rarely addressed in existing research.
MM-DFN: Multimodal Dynamic Fusion Network for Emotion Recognition in Conversations
For multimodal ERC, it is vital to understand context and fuse modality information in conversations.