Self-attention fusion for audiovisual emotion recognition with incomplete data

26 Jan 2022 · Kateryna Chumachenko, Alexandros Iosifidis, Moncef Gabbouj

In this paper, we consider the problem of multimodal data analysis with a use case of audiovisual emotion recognition. We propose an architecture capable of learning from raw data and describe three variants of it with distinct modality fusion mechanisms. While most previous works consider the ideal scenario in which both modalities are present at all times during inference, we evaluate the robustness of the model in unconstrained settings where one modality is absent or noisy, and propose a method to mitigate these limitations in the form of modality dropout. Most importantly, we find that this approach not only improves performance drastically when one modality is absent or noisy, but also improves performance in the standard ideal setting, outperforming competing methods.
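The abstract names two mechanisms: attention-based fusion of the two modalities, and modality dropout to handle absent or noisy inputs. Below is a minimal sketch of both ideas in PyTorch. It is not the authors' code; the class names, dimensions, and the drop probability are illustrative assumptions, and the fusion block uses standard cross-modal attention (each modality's tokens query the other's) as one plausible instance of the paper's fusion family.

```python
# Illustrative sketch only; not the paper's released implementation.
import torch
import torch.nn as nn


class ModalityDropout(nn.Module):
    """Zero out one modality's features at random during training,
    so the model learns to cope with a missing modality at test time."""

    def __init__(self, p: float = 0.25):
        super().__init__()
        self.p = p  # assumed per-modality drop probability

    def forward(self, audio: torch.Tensor, visual: torch.Tensor):
        if self.training:
            r = torch.rand(1).item()
            if r < self.p:                      # drop the audio stream
                audio = torch.zeros_like(audio)
            elif r < 2 * self.p:                # drop the visual stream
                visual = torch.zeros_like(visual)
        return audio, visual


class AttentionFusion(nn.Module):
    """Fuse audio/visual token sequences with cross-modal attention."""

    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.a2v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.v2a = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, audio: torch.Tensor, visual: torch.Tensor):
        # Each modality queries the other; pooled outputs are concatenated.
        a, _ = self.a2v(audio, visual, visual)  # audio attends to visual
        v, _ = self.v2a(visual, audio, audio)   # visual attends to audio
        return torch.cat([a.mean(dim=1), v.mean(dim=1)], dim=-1)


# Usage with dummy token sequences of shape (batch, seq_len, dim):
audio = torch.randn(8, 20, 128)
visual = torch.randn(8, 20, 128)
drop = ModalityDropout(p=0.25).train()
fuse = AttentionFusion(dim=128, heads=4)
audio, visual = drop(audio, visual)
fused = fuse(audio, visual)  # shape: (8, 256), fed to a classifier head
```

Dropping at most one modality per step (rather than both independently) ensures some signal always reaches the classifier, which matches the paper's goal of robustness to a single absent or noisy modality.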


Datasets

RAVDESS
Results from the Paper


Task | Dataset | Model | Metric | Value | Global Rank
Facial Emotion Recognition | RAVDESS | Intermediate-Transformer-Fusion (visual branch only) | Accuracy | 74.92% | #1
Emotion Recognition | RAVDESS | Intermediate-Attention-Fusion | Accuracy | 81.58% | #2

Methods


No methods listed for this paper.