Multimodal sentiment analysis is the task of performing sentiment analysis with multiple data sources, e.g., a camera feed of someone's face together with a recording of their speech.
Emotion recognition in conversations is crucial for building empathetic machines.
Humans convey their intentions through both verbal and nonverbal behaviors during face-to-face communication.
Multimodal sentiment analysis is a developing area of research, which involves the identification of sentiments in videos.
Speech emotion recognition is a challenging task, and well-performing classifiers have relied heavily on models built from audio features.
Previous research in this field has exploited the expressiveness of tensors for multimodal representation.
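One common way tensors are used for multimodal representation is to take the outer product of the per-modality embeddings, so the fused tensor contains unimodal, bimodal, and trimodal interaction terms. A minimal sketch of that idea (the function name, shapes, and the trick of appending a constant 1 to each vector are illustrative assumptions, not a specific paper's exact model):

```python
import numpy as np

def tensor_fusion(text_emb, audio_emb, video_emb):
    """Fuse per-modality embeddings via their outer product.

    Appending a constant 1.0 to each vector makes the trimodal tensor
    also contain the unimodal and bimodal interaction terms; the
    flattened result would feed a downstream sentiment classifier.
    Shapes and names here are illustrative assumptions.
    """
    t = np.append(text_emb, 1.0)
    a = np.append(audio_emb, 1.0)
    v = np.append(video_emb, 1.0)
    fused = np.einsum("i,j,k->ijk", t, a, v)  # outer product over 3 modalities
    return fused.reshape(-1)

# Toy embeddings of sizes 3, 2, 4 -> fused vector of size 4 * 3 * 5 = 60
fused = tensor_fusion(np.ones(3), np.ones(2), np.ones(4))
print(fused.shape)  # (60,)
```

The fused vector grows multiplicatively with the modality dimensions, which is the cost of capturing all cross-modal interaction terms explicitly.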
In this paper, we propose the Gated Multimodal Embedding LSTM with Temporal Attention (GME-LSTM(A)) model, which is composed of two modules.
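The abstract does not spell the two modules out here, but the name suggests a gated multimodal embedding feeding an LSTM with temporal attention. A minimal sketch of the gating idea alone, under the assumption that a learned sigmoid gate decides how much of a noisy nonverbal embedding enters the fused representation (all names, shapes, and parameters are illustrative, not the paper's exact architecture):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_embedding(x_text, x_audio, W_gate, b_gate):
    """Gate an audio embedding before concatenating it with text.

    The scalar gate is a learned function of both modalities, so the
    model can suppress the audio stream when it is unreliable.
    Parameter shapes and names are illustrative assumptions.
    """
    g = sigmoid(W_gate @ np.concatenate([x_text, x_audio]) + b_gate)  # scalar gate in (0, 1)
    return np.concatenate([x_text, g * x_audio])

# Toy example: 3-dim text, 2-dim audio, zero-initialized gate params (gate = 0.5)
x_t, x_a = np.ones(3), np.ones(2)
W, b = np.zeros((1, 5)), np.zeros(1)
out = gated_embedding(x_t, x_a, W, b)
print(out)  # [1. 1. 1. 0.5 0.5]
```

In a full model, a sequence of such gated embeddings would be fed to an LSTM, with a temporal attention mechanism weighting the time steps.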
We propose a novel approach to multimodal sentiment analysis using deep neural networks that combine visual analysis and natural language processing.
In this paper, we propose a Select-Additive Learning (SAL) procedure that improves the generalizability of trained neural networks for multimodal sentiment analysis.