Video Sentiment Analysis with Bimodal Information-augmented Multi-Head Attention

3 Mar 2021 · Ting Wu, Junjie Peng, Wenqiang Zhang, Huiran Zhang, Chuanshuai Ma, Yansong Huang

Humans express feelings and emotions through multiple channels. Language, for example, conveys different sentiments under different visual-acoustic contexts. To understand human intentions precisely and to reduce misunderstandings caused by ambiguity and sarcasm, we should consider multimodal signals, including textual, visual and acoustic ones. The crucial challenge is how to fuse the features of these different modalities for sentiment analysis. To fuse the information carried by different modalities effectively and better predict sentiment, we design a novel multi-head attention based fusion network, inspired by the observation that the interactions between any pair of modalities differ and that the pairs do not contribute equally to the final sentiment prediction. By assigning reasonable attention to the acoustic-visual, acoustic-textual and visual-textual features and exploiting a residual structure, we aim to retain the most significant features. We conduct extensive experiments on four public multimodal datasets, one in Chinese and three in English. The results show that our approach outperforms existing methods and can explain the contributions of bimodal interactions among multiple modalities.
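The abstract does not give implementation details, but the following is a minimal sketch of how pair-wise bimodal multi-head attention with a residual connection and unequal pair weighting could be expressed in PyTorch. The module names (`BimodalAttention`, `TrimodalFusion`), dimensions, pooling and the softmax pair-weighting scheme are illustrative assumptions, not the authors' released architecture.

```python
import torch
import torch.nn as nn


class BimodalAttention(nn.Module):
    """Cross-attention between one pair of modalities with a residual link.
    Illustrative sketch only; structure and sizes are assumptions."""

    def __init__(self, d_model: int = 128, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, query_mod: torch.Tensor, context_mod: torch.Tensor) -> torch.Tensor:
        # One modality queries the other; the residual keeps the original signal.
        attended, _ = self.attn(query_mod, context_mod, context_mod)
        return self.norm(query_mod + attended)


class TrimodalFusion(nn.Module):
    """Fuses acoustic-visual, acoustic-textual and visual-textual interactions
    with learned weights, reflecting that the pairs contribute unequally."""

    def __init__(self, d_model: int = 128, num_heads: int = 4, num_outputs: int = 1):
        super().__init__()
        self.av = BimodalAttention(d_model, num_heads)
        self.at = BimodalAttention(d_model, num_heads)
        self.vt = BimodalAttention(d_model, num_heads)
        self.pair_weights = nn.Parameter(torch.ones(3))  # assumed unequal pair contributions
        self.head = nn.Linear(d_model, num_outputs)

    def forward(self, text: torch.Tensor, audio: torch.Tensor, vision: torch.Tensor) -> torch.Tensor:
        av = self.av(audio, vision)   # acoustic-visual interaction
        at = self.at(audio, text)     # acoustic-textual interaction
        vt = self.vt(vision, text)    # visual-textual interaction
        w = torch.softmax(self.pair_weights, dim=0)
        fused = w[0] * av + w[1] * at + w[2] * vt   # (batch, seq, d_model)
        return self.head(fused.mean(dim=1))         # pooled sentiment prediction


# Example usage with random features of shape (batch, seq_len, d_model)
if __name__ == "__main__":
    batch, seq, d = 2, 20, 128
    model = TrimodalFusion(d_model=d)
    text, audio, vision = (torch.randn(batch, seq, d) for _ in range(3))
    print(model(text, audio, vision).shape)  # torch.Size([2, 1])
```

In this sketch, each bimodal block lets one modality attend to another, and the softmax over `pair_weights` is one simple way to model the claim that the three pair-wise interactions do not contribute equally to the final prediction.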


