HCAM -- Hierarchical Cross Attention Model for Multi-modal Emotion Recognition

14 Apr 2023 · Soumya Dutta, Sriram Ganapathy

Emotion recognition in conversations is challenging due to the multi-modal nature of emotion expression. We propose a hierarchical cross-attention model (HCAM) approach to multi-modal emotion recognition using a combination of recurrent and co-attention neural network models. The input to the model consists of two modalities: i) audio data, processed through a learnable wav2vec approach, and ii) text data, represented using a bidirectional encoder representations from transformers (BERT) model. The audio and text representations are processed by a set of bi-directional recurrent neural network layers with self-attention that convert each utterance in a given conversation to a fixed-dimensional embedding. To incorporate contextual knowledge and information across the two modalities, the audio and text embeddings are combined using a co-attention layer that weighs the utterance-level embeddings by their relevance to the task of emotion recognition. The neural network parameters in the audio layers, the text layers and the multi-modal co-attention layers are trained hierarchically for the emotion classification task. We perform experiments on three established datasets, namely IEMOCAP, MELD and CMU-MOSI, where we show that the proposed model improves significantly over other benchmarks and achieves state-of-the-art results on all three datasets.
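The abstract describes fusing wav2vec-based audio and BERT-based text utterance embeddings with recurrent layers and a cross-modal co-attention stage. Below is a minimal PyTorch sketch of such a fusion stage; the embedding dimension, the use of nn.MultiheadAttention for co-attention, the GRU context layers and the four-class output are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a co-attention fusion over per-utterance audio and text embeddings.
# All sizes and layer choices are assumptions for illustration.
import torch
import torch.nn as nn

class CoAttentionFusion(nn.Module):
    """Fuse audio and text utterance embeddings of one conversation."""
    def __init__(self, dim=256, num_heads=4, num_classes=4):
        super().__init__()
        # Bidirectional recurrent context over the conversation, per modality.
        self.audio_gru = nn.GRU(dim, dim // 2, bidirectional=True, batch_first=True)
        self.text_gru = nn.GRU(dim, dim // 2, bidirectional=True, batch_first=True)
        # Co-attention: each modality queries the other.
        self.audio_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.text_to_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, audio_emb, text_emb):
        # audio_emb, text_emb: (batch, num_utterances, dim) fixed-dimensional
        # utterance embeddings from the audio (wav2vec) and text (BERT) branches.
        a, _ = self.audio_gru(audio_emb)
        t, _ = self.text_gru(text_emb)
        # Audio attends to text and vice versa, weighting utterances by relevance.
        a_ctx, _ = self.audio_to_text(query=a, key=t, value=t)
        t_ctx, _ = self.text_to_audio(query=t, key=a, value=a)
        fused = torch.cat([a_ctx, t_ctx], dim=-1)   # (batch, num_utterances, 2*dim)
        return self.classifier(fused)               # per-utterance emotion logits

# Usage with random stand-ins for the pretrained utterance embeddings.
model = CoAttentionFusion()
audio = torch.randn(2, 10, 256)   # 2 conversations, 10 utterances each
text = torch.randn(2, 10, 256)
logits = model(audio, text)       # shape: (2, 10, 4)
```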

Task                                | Dataset  | Model                    | Metric      | Value | Global Rank
Emotion Recognition in Conversation | CMU-MOSI | Audio + Text (Stage III) | F1 score    | 0.858 | # 1
Multimodal Emotion Recognition      | IEMOCAP  | Audio + Text (Stage III) | F1          | 0.705 | # 8
Multimodal Emotion Recognition      | MELD     | Audio + Text (Stage III) | F1          | 65.8  | # 1
Emotion Recognition in Conversation | MELD     | Audio + Text (Stage III) | Weighted-F1 | 65.8  | # 20

Methods

wav2vec, BERT, bi-directional recurrent neural networks, self-attention, co-attention