Multimodal Sentiment Analysis
74 papers with code • 5 benchmarks • 7 datasets
Multimodal sentiment analysis is the task of performing sentiment analysis with multiple data sources, e.g. a camera feed of someone's face together with their recorded speech.
(Image credit: ICON: Interactive Conversational Memory Network for Multimodal Emotion Detection)
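As a concrete illustration of the task, a minimal late-fusion baseline might combine per-modality sentiment scores into a single prediction. The scores, weights, and function name below are illustrative assumptions, not taken from any of the papers listed on this page:

```python
# Minimal late-fusion sketch for multimodal sentiment analysis.
# Per-modality scores and fusion weights are illustrative placeholders,
# not outputs of a real model.

def fuse_sentiment(scores, weights):
    """Weighted average of per-modality sentiment scores in [-1, 1].

    scores  -- dict mapping modality name to a sentiment score
    weights -- dict mapping modality name to its fusion weight
    """
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

# Example: the text is positive, the facial expression mildly negative,
# and the voice tone neutral.
scores = {"text": 0.8, "vision": -0.2, "audio": 0.0}
weights = {"text": 0.5, "vision": 0.3, "audio": 0.2}
print(round(fuse_sentiment(scores, weights), 3))  # 0.34
```

In practice, papers on this page replace such fixed weights with learned fusion (e.g. attention over modalities), but the principle of aggregating modality-specific evidence is the same.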
Libraries
Use these libraries to find Multimodal Sentiment Analysis models and implementations.
Latest papers with no code
Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities
Specifically, we present a sample-level contrastive distillation mechanism that transfers comprehensive knowledge containing cross-sample correlations to reconstruct missing semantics.
Trustworthy Multimodal Fusion for Sentiment Analysis in Ordinal Sentiment Space
To address the aforementioned problems, we propose a trustworthy multimodal sentiment ordinal network (TMSON) to improve performance in sentiment analysis.
TCAN: Text-oriented Cross Attention Network for Multimodal Sentiment Analysis
Motivated by these insights, we introduce a Text-oriented Cross-Attention Network (TCAN), emphasizing the predominant role of the text modality in MSA.
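The idea of a text-oriented cross-attention can be sketched in a simplified single-head form: text features act as queries attending over another modality's features (used here as both keys and values). The dimensions, toy data, and this reduced form are assumptions for illustration only, not the actual TCAN architecture:

```python
# Sketch of text-oriented cross-attention: text vectors are the queries,
# and another modality's vectors serve as keys and values (K = V here).
# Single-head, no learned projections -- an illustrative simplification.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(text_q, other_kv, dim):
    """For each text query, return an attention-weighted mix of the
    other modality's vectors (scaled dot-product attention)."""
    out = []
    for q in text_q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(dim)
                  for k in other_kv]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, other_kv))
                    for j in range(dim)])
    return out

# Two text-token queries attending over three audio-frame vectors (dim 2).
text_q = [[1.0, 0.0], [0.0, 1.0]]
audio = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]
fused = cross_attention(text_q, audio, dim=2)
```

Each fused vector is a convex combination of the audio vectors, steered by how strongly each audio frame matches the text query, which is what lets the text modality dominate the fusion.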
Towards Multimodal Sentiment Analysis Debiasing via Bias Purification
In the inference phase, given a factual multimodal input, MCIS imagines two counterfactual scenarios to purify and mitigate these biases.
Emoji Driven Crypto Assets Market Reactions
In the burgeoning realm of cryptocurrency, social media platforms like Twitter have become pivotal in influencing market trends and investor sentiments.
Sentiment-enhanced Graph-based Sarcasm Explanation in Dialogue
Although existing studies have achieved great success based on the generative pretrained language model BART, they overlook exploiting the sentiments residing in the utterance, video and audio, which are vital clues for sarcasm explanation.
Toward Robust Multimodal Learning using Multimodal Foundational Models
Recently, CLIP-based multimodal foundational models have demonstrated impressive performance on numerous multimodal tasks by learning the aligned cross-modal semantics of image and text pairs; however, these foundational models still cannot directly address scenarios in which a modality is absent.
WisdoM: Improving Multimodal Sentiment Analysis by Fusing Contextual World Knowledge
Sentiment analysis is rapidly advancing by utilizing various data modalities (e.g., text, image).
Multimodal Sentiment Analysis with Missing Modality: A Knowledge-Transfer Approach
Multimodal sentiment analysis aims to identify the emotions expressed by individuals through visual, language, and acoustic cues.
Explainable Multimodal Sentiment Analysis on Bengali Memes
Memes have become a distinctive and effective form of communication in the digital era, attracting online communities and cutting across cultural barriers.