Emotion Recognition in Conversation
72 papers with code • 12 benchmarks • 14 datasets
Given the transcript of a conversation along with speaker information for each constituent utterance, the ERC task aims to identify the emotion of each utterance from a set of pre-defined emotions. Formally, given an input sequence of N utterances [(u1, p1), (u2, p2), . . . , (uN, pN)], where each utterance ui = [ui,1, ui,2, . . . , ui,T] consists of T words ui,j and is spoken by party pi, the task is to predict the emotion label ei of each utterance ui.
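The task definition above can be sketched as a simple interface: a conversation is a list of (utterance, speaker) pairs, and a model maps it to one emotion label per utterance. The names, label set, and keyword baseline below are purely illustrative stand-ins for a real ERC model.

```python
# Hypothetical ERC interface sketch (all names are illustrative).
# A conversation is a list of (utterance, speaker) pairs; the model
# returns one emotion label per utterance.

EMOTIONS = ["neutral", "joy", "sadness", "anger", "surprise", "fear", "disgust"]

conversation = [
    ("I finally got the job!", "A"),
    ("That's wonderful news.", "B"),
    ("I can't believe it either.", "A"),
]

def predict_emotions(conv):
    """Toy keyword baseline standing in for a trained ERC model."""
    cues = {"finally": "joy", "wonderful": "joy", "believe": "surprise"}
    labels = []
    for utterance, _speaker in conv:
        label = "neutral"
        for word in utterance.lower().split():
            w = word.strip("!.,?'")
            if w in cues:
                label = cues[w]
                break
        labels.append(label)
    return labels

print(predict_emotions(conversation))  # one label per utterance
```

A real system would replace the keyword lookup with a context-aware encoder, but the input/output contract (N utterances in, N emotion labels out) is the same.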
Libraries
Use these libraries to find Emotion Recognition in Conversation models and implementations.

Latest papers
FATRER: Full-Attention Topic Regularizer for Accurate and Robust Conversational Emotion Recognition
This paper concentrates on the understanding of interlocutors' emotions evoked in conversational utterances.
A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations
With the extracted face sequences, we propose a multimodal facial expression-aware emotion recognition model, which leverages the frame-level facial emotion distributions to help improve utterance-level emotion recognition based on multi-task learning.
Mimicking the Thinking Process for Emotion Recognition in Conversation with Prompts and Paraphrasing
It is a challenging task since the recognition of the emotion in one utterance involves many complex factors, such as the conversational context, the speaker's background, and the subtle difference between emotion labels.
Supervised Adversarial Contrastive Learning for Emotion Recognition in Conversations
To address this, we propose a supervised adversarial contrastive learning (SACL) framework for learning class-spread structured representations in a supervised manner.
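For orientation, the snippet below sketches a generic supervised contrastive loss (in the style of Khosla et al.), which SACL-like frameworks build on. This is not the SACL objective itself: the adversarial component is omitted, and all names are illustrative.

```python
# Minimal supervised contrastive loss in pure Python (illustrative sketch,
# not the authors' SACL objective; the adversarial perturbation is omitted).
import math

def sup_con_loss(embeddings, labels, tau=0.1):
    """embeddings: list of L2-normalized vectors; labels: class ids.

    For each anchor i, same-class samples are positives; the loss pulls
    positives together and pushes all other samples away in embedding space.
    """
    n = len(embeddings)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    total = 0.0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # no positive pair for this anchor
        denom = sum(math.exp(dot(embeddings[i], embeddings[a]) / tau)
                    for a in range(n) if a != i)
        for p in positives:
            total -= math.log(
                math.exp(dot(embeddings[i], embeddings[p]) / tau) / denom
            ) / len(positives)
    return total / n
```

Embeddings clustered by class yield a lower loss than mixed ones, which is the "class-spread structured representation" intuition in one line.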
Speech-Text Dialog Pre-training for Spoken Dialog Understanding with Explicit Cross-Modal Alignment
In this paper, we propose Speech-text dialog Pre-training for spoken dialog understanding with ExpliCiT cRoss-Modal Alignment (SPECTRA), which is the first-ever speech-text dialog pre-training model.
How to Enhance Causal Discrimination of Utterances: A Case on Affective Reasoning
The authors incorporate noise terms into the conversation process, thereby constructing a structural causal model (SCM).
Context-Dependent Embedding Utterance Representations for Emotion Recognition in Conversations
The usual approach to model the conversational context has been to produce context-independent representations of each utterance and subsequently perform contextual modeling of these.
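The two-stage pipeline described above can be sketched as follows; both stages use toy stand-ins (a length-based feature extractor and a neighbor-averaging smoother) for a real sentence encoder and context model, and all names are illustrative.

```python
# Sketch of the usual two-stage ERC pipeline (illustrative stand-ins only):
# 1) encode each utterance independently, 2) contextualize across the dialog.

def encode_utterance(utterance):
    """Stand-in for a sentence encoder: crude length-based features."""
    words = utterance.lower().split()
    return [float(len(words)),
            sum(len(w) for w in words) / max(len(words), 1)]

def contextualize(reps, window=1):
    """Stand-in for a context model: average each rep with its neighbors."""
    out = []
    for i in range(len(reps)):
        lo, hi = max(0, i - window), min(len(reps), i + window + 1)
        neighbors = reps[lo:hi]
        out.append([sum(v[d] for v in neighbors) / len(neighbors)
                    for d in range(len(reps[0]))])
    return out

utterances = ["Hi there", "How are you today", "Fine"]
context_reps = contextualize([encode_utterance(u) for u in utterances])
```

The paper's point is that stage 1 discards context that stage 2 then has to recover; context-dependent utterance representations fold the dialog context into the encoding step itself.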
EmotionIC: Emotional Inertia and Contagion-Driven Dependency Modeling for Emotion Recognition in Conversation
Emotion Recognition in Conversation (ERC) has attracted growing attention in recent years as a result of the advancement and implementation of human-computer interface technologies.
Multivariate, Multi-Frequency and Multimodal: Rethinking Graph Neural Networks for Emotion Recognition in Conversation
Yet, previous works tend to encode multimodal and contextual relationships in a loosely-coupled manner, which may harm relationship modelling.
UniMSE: Towards Unified Multimodal Sentiment Analysis and Emotion Recognition
Multimodal sentiment analysis (MSA) and emotion recognition in conversation (ERC) are key research topics for computers to understand human behaviors.