Emotion Recognition in Conversation
40 papers with code • 9 benchmarks • 10 datasets
Given the transcript of a conversation along with speaker information for each constituent utterance, the ERC task aims to identify the emotion of each utterance from a set of pre-defined emotions. Formally, given an input sequence of N utterances [(u_1, p_1), (u_2, p_2), ..., (u_N, p_N)], where each utterance u_i = [u_{i,1}, u_{i,2}, ..., u_{i,T}] consists of T words u_{i,j} and is spoken by party p_i, the task is to predict the emotion label e_i of each utterance u_i.
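The input/output structure above can be sketched in code. This is a minimal illustration only: the label set and the keyword rules are hypothetical placeholders, not any benchmark's actual labels or any published model.

```python
from typing import List, Tuple

# Hypothetical label set for illustration; real ERC benchmarks
# (e.g. IEMOCAP, MELD) each define their own emotion inventory.
EMOTIONS = ["neutral", "joy", "sadness", "anger"]

def predict_emotions(conversation: List[Tuple[str, str]]) -> List[str]:
    """Toy stand-in for an ERC model: one emotion label per utterance.

    `conversation` is the sequence [(u_1, p_1), ..., (u_N, p_N)] from the
    task definition: each item pairs an utterance string with its speaker.
    A real model would exploit conversational context and speaker identity;
    this placeholder keys on a few surface cues purely for illustration.
    """
    labels = []
    for utterance, speaker in conversation:
        text = utterance.lower()
        if "great" in text or "love" in text:
            labels.append("joy")
        elif "hate" in text or "angry" in text:
            labels.append("anger")
        elif "sad" in text or "sorry" in text:
            labels.append("sadness")
        else:
            labels.append("neutral")
    return labels

dialogue = [
    ("I love this plan, it's great!", "speaker_A"),
    ("I'm so sorry to hear that.", "speaker_B"),
]
print(predict_emotions(dialogue))  # one label e_i per utterance u_i
```

The point is the interface, not the rules: any ERC system maps the utterance/speaker sequence to a label sequence of the same length N.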
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks.
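The core of such a sentence-level CNN is a convolution over windows of word vectors followed by max-pooling over time. Below is a minimal NumPy sketch of that operation; the shapes, filter count, and random embeddings are assumptions for illustration, not the cited experiments' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 7, 50                          # sentence length, word-vector dimension
sentence = rng.normal(size=(T, d))    # stand-in for pre-trained word vectors

def conv_maxpool(x, filters, width):
    """Slide each filter (shape: n_filters x width x d) over windows of
    `width` consecutive word vectors, then max-pool over time, yielding
    one feature per filter -- the basic building block of a sentence CNN."""
    n_filters = filters.shape[0]
    n_windows = x.shape[0] - width + 1
    feats = np.empty((n_filters, n_windows))
    for t in range(n_windows):
        window = x[t:t + width]                          # width x d slice
        feats[:, t] = np.tanh((filters * window).sum(axis=(1, 2)))
    return feats.max(axis=1)                             # pool over time

filters = rng.normal(size=(100, 3, d)) * 0.1             # 100 trigram filters
features = conv_maxpool(sentence, filters, width=3)
print(features.shape)  # (100,)
```

In a full model these pooled features would feed a softmax classifier, and the filters (and optionally the word vectors) would be learned by backpropagation.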
We propose several strong multimodal baselines and show the importance of contextual and multimodal information for emotion recognition in conversations.
Emotion detection in conversations is a necessary step for a number of applications, including opinion mining over chat history, social media threads, debates, argumentation mining, understanding consumer feedback in live conversations, etc.
Emotion recognition in conversation (ERC) has lately received much attention from researchers due to its potential widespread applications in diverse areas, such as health-care, education, and human resources.
The experimental results show that the proposed method outperforms the state-of-the-art methods on several datasets, particularly on document-level datasets.