MMLatch: Bottom-up Top-down Fusion for Multimodal Sentiment Analysis

24 Jan 2022 · Georgios Paraskevopoulos, Efthymios Georgiou, Alexandros Potamianos

Current deep learning approaches for multimodal fusion rely on bottom-up fusion of high- and mid-level latent modality representations (late/mid fusion) or of low-level sensory inputs (early fusion). Models of human perception highlight the importance of top-down fusion, where high-level representations affect the way sensory inputs are perceived, i.e., cognition affects perception. These top-down interactions are not captured in current deep learning models. In this work, we propose a neural architecture that captures top-down cross-modal interactions, using a feedback mechanism in the forward pass during network training. The proposed mechanism extracts high-level representations for each modality and uses these representations to mask the sensory inputs, allowing the model to perform top-down feature masking. We apply the proposed model to multimodal sentiment recognition on CMU-MOSEI. Our method shows consistent improvements over the well-established MulT and over our strong late fusion baseline, achieving state-of-the-art results.
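Below is a minimal PyTorch sketch of the top-down masking idea described in the abstract: each modality is first encoded bottom-up, the high-level summaries of the other modalities are used to gate (mask) its low-level input features, and the masked inputs are re-encoded before late fusion. All names and choices here (TopDownMask, MaskedFusionSketch, hidden_dim, LSTM encoders, a sigmoid gate, shared encoders across the two passes) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of top-down feature masking (not the authors' code).
# Each modality is encoded bottom-up; high-level summaries of the OTHER
# modalities produce a sigmoid mask over this modality's low-level input
# features, which are then re-encoded in a second bottom-up pass.

import torch
import torch.nn as nn


class TopDownMask(nn.Module):
    """Maps cross-modal high-level states to a mask over input features."""

    def __init__(self, hidden_dim: int, feat_dim: int):
        super().__init__()
        self.proj = nn.Linear(2 * hidden_dim, feat_dim)

    def forward(self, x, h_other_1, h_other_2):
        # x: (batch, seq_len, feat_dim) low-level features of the target modality
        # h_other_*: (batch, hidden_dim) high-level summaries of the other modalities
        gate = torch.sigmoid(self.proj(torch.cat([h_other_1, h_other_2], dim=-1)))
        return x * gate.unsqueeze(1)  # broadcast the mask over the time dimension


class MaskedFusionSketch(nn.Module):
    """Two-pass forward: bottom-up encode, top-down mask, re-encode, late fusion."""

    def __init__(self, dims, hidden_dim=64, num_classes=1):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.LSTM(d, hidden_dim, batch_first=True) for d in dims]
        )
        self.masks = nn.ModuleList([TopDownMask(hidden_dim, d) for d in dims])
        self.head = nn.Linear(3 * hidden_dim, num_classes)

    def encode(self, xs):
        # Last hidden state of each modality encoder: list of (batch, hidden_dim).
        return [enc(x)[1][0][-1] for enc, x in zip(self.encoders, xs)]

    def forward(self, text, audio, visual):
        xs = [text, audio, visual]          # each: (batch, seq_len, feat_dim_i)
        h = self.encode(xs)                 # first bottom-up pass
        masked = [
            self.masks[i](xs[i], h[(i + 1) % 3], h[(i + 2) % 3])
            for i in range(3)
        ]
        h2 = self.encode(masked)            # second pass on top-down masked inputs
        return self.head(torch.cat(h2, dim=-1))  # late fusion + prediction
```

In the paper, the masking acts as a feedback connection in the forward pass during training; the exact encoders, mask parameterization, and fusion head differ from this sketch, which only illustrates the bottom-up/top-down interaction.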


Datasets

CMU-MOSEI

Results from the Paper


Ranked #7 on Multimodal Sentiment Analysis on CMU-MOSEI (using extra training data)

Task                          | Dataset   | Model   | Metric   | Value | Global Rank | Uses Extra Training Data
Multimodal Sentiment Analysis | CMU-MOSEI | MMLatch | Accuracy | 82.4  | #7          | Yes
Multimodal Sentiment Analysis | CMU-MOSEI | MMLatch | MAE      | 0.7   | #5          | Yes

Methods


No methods listed for this paper.