Gated Mechanism for Attention Based Multimodal Sentiment Analysis

21 Feb 2020  ·  Ayush Kumar, Jithendra Vepa ·

Multimodal sentiment analysis has recently gained popularity because of its relevance to social media posts, customer service calls and video blogs. In this paper, we address three aspects of multimodal sentiment analysis: 1) cross-modal interaction learning, i.e., how multiple modalities contribute to the sentiment; 2) learning long-term dependencies in multimodal interactions; and 3) fusion of unimodal and cross-modal cues. Of these three, we find that learning cross-modal interactions is the most beneficial for this problem. We perform experiments on two benchmark datasets, the CMU Multimodal Opinion-level Sentiment Intensity (CMU-MOSI) and CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) corpora. Our approach yields accuracies of 83.9% and 81.1% on these two tasks respectively, an absolute improvement of 1.6% and 1.34% over the current state-of-the-art.
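The gated cross-modal interaction described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact architecture: it assumes one modality (text) attends over another (audio) via scaled dot-product attention, and a learned sigmoid gate then decides, per dimension, how much cross-modal context to admit relative to the unimodal features. The weight names `Wg` and `bg` are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_cross_modal_attention(text, audio, Wg, bg):
    """Sketch of a gated cross-modal attention block (assumed form).

    text:  (T_t, d) unimodal text features
    audio: (T_a, d) unimodal audio features
    Wg:    (2d, d) gate weights, bg: (d,) gate bias (hypothetical names)
    """
    d = text.shape[-1]
    # Text queries attend over audio keys/values (scaled dot-product).
    scores = text @ audio.T / np.sqrt(d)          # (T_t, T_a)
    attended = softmax(scores, axis=-1) @ audio   # (T_t, d)
    # Sigmoid gate computed from concatenated unimodal + cross-modal features.
    z = np.concatenate([text, attended], axis=-1) @ Wg + bg
    gate = 1.0 / (1.0 + np.exp(-z))               # (T_t, d), in (0, 1)
    # Convex combination: gate controls the cross-modal contribution.
    return gate * attended + (1.0 - gate) * text

rng = np.random.default_rng(0)
d = 8
text = rng.standard_normal((5, d))    # 5 text time steps
audio = rng.standard_normal((7, d))   # 7 audio frames
Wg = rng.standard_normal((2 * d, d)) * 0.1
bg = np.zeros(d)

fused = gated_cross_modal_attention(text, audio, Wg, bg)
print(fused.shape)  # (5, 8)
```

Because the gate output lies in (0, 1), each fused value is an elementwise convex combination of the unimodal and attended cross-modal features, so neither modality can be entirely discarded by the block.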

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Multimodal Sentiment Analysis | CMU-MOSEI | Proposed: B2 + B4 w/ multimodal fusion | Accuracy | 81.14 | #9 |
| Multimodal Sentiment Analysis | MOSI | Proposed: B2 + B4 w/ multimodal fusion | Accuracy | 83.91% | #5 |
| Multimodal Sentiment Analysis | MOSI | Proposed: B2 + B4 w/ multimodal fusion | F1 score | 81.17 | #5 |

